00:00:00.000 Started by upstream project "autotest-nightly" build number 4359 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3722 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.128 using credential 00000000-0000-0000-0000-000000000002 00:00:00.130 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.184 Fetching changes from the remote Git repository 00:00:00.186 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.240 Using shallow fetch with depth 1 00:00:00.240 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.240 > git --version # timeout=10 00:00:00.280 > git --version # 'git version 2.39.2' 00:00:00.281 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.974 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.985 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.998 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.998 > git config core.sparsecheckout # timeout=10 00:00:08.009 > git read-tree -mu HEAD # timeout=10 00:00:08.024 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.049 
Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.049 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.150 [Pipeline] Start of Pipeline 00:00:08.160 [Pipeline] library 00:00:08.162 Loading library shm_lib@master 00:00:08.162 Library shm_lib@master is cached. Copying from home. 00:00:08.173 [Pipeline] node 00:00:08.184 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.185 [Pipeline] { 00:00:08.193 [Pipeline] catchError 00:00:08.194 [Pipeline] { 00:00:08.204 [Pipeline] wrap 00:00:08.212 [Pipeline] { 00:00:08.218 [Pipeline] stage 00:00:08.220 [Pipeline] { (Prologue) 00:00:08.417 [Pipeline] sh 00:00:08.702 + logger -p user.info -t JENKINS-CI 00:00:08.716 [Pipeline] echo 00:00:08.717 Node: WFP4 00:00:08.723 [Pipeline] sh 00:00:09.021 [Pipeline] setCustomBuildProperty 00:00:09.034 [Pipeline] echo 00:00:09.035 Cleanup processes 00:00:09.041 [Pipeline] sh 00:00:09.327 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.327 3706478 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.341 [Pipeline] sh 00:00:09.625 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.625 ++ grep -v 'sudo pgrep' 00:00:09.625 ++ awk '{print $1}' 00:00:09.625 + sudo kill -9 00:00:09.625 + true 00:00:09.641 [Pipeline] cleanWs 00:00:09.651 [WS-CLEANUP] Deleting project workspace... 00:00:09.651 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.658 [WS-CLEANUP] done 00:00:09.662 [Pipeline] setCustomBuildProperty 00:00:09.677 [Pipeline] sh 00:00:09.961 + sudo git config --global --replace-all safe.directory '*' 00:00:10.058 [Pipeline] httpRequest 00:00:10.737 [Pipeline] echo 00:00:10.738 Sorcerer 10.211.164.20 is alive 00:00:10.747 [Pipeline] retry 00:00:10.748 [Pipeline] { 00:00:10.762 [Pipeline] httpRequest 00:00:10.766 HttpMethod: GET 00:00:10.767 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.767 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.788 Response Code: HTTP/1.1 200 OK 00:00:10.789 Success: Status code 200 is in the accepted range: 200,404 00:00:10.789 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.707 [Pipeline] } 00:00:21.726 [Pipeline] // retry 00:00:21.733 [Pipeline] sh 00:00:22.017 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:22.033 [Pipeline] httpRequest 00:00:22.492 [Pipeline] echo 00:00:22.494 Sorcerer 10.211.164.20 is alive 00:00:22.503 [Pipeline] retry 00:00:22.504 [Pipeline] { 00:00:22.518 [Pipeline] httpRequest 00:00:22.522 HttpMethod: GET 00:00:22.523 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:22.523 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:22.535 Response Code: HTTP/1.1 200 OK 00:00:22.536 Success: Status code 200 is in the accepted range: 200,404 00:00:22.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:31.655 [Pipeline] } 00:01:31.671 [Pipeline] // retry 00:01:31.678 [Pipeline] sh 00:01:31.964 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:01:34.510 [Pipeline] sh 00:01:34.794 + git -C spdk log 
--oneline -n5 00:01:34.794 e01cb43b8 mk/spdk.common.mk sed the minor version 00:01:34.794 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:01:34.794 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:34.794 66289a6db build: use VERSION file for storing version 00:01:34.794 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:34.805 [Pipeline] } 00:01:34.818 [Pipeline] // stage 00:01:34.828 [Pipeline] stage 00:01:34.830 [Pipeline] { (Prepare) 00:01:34.845 [Pipeline] writeFile 00:01:34.860 [Pipeline] sh 00:01:35.143 + logger -p user.info -t JENKINS-CI 00:01:35.155 [Pipeline] sh 00:01:35.439 + logger -p user.info -t JENKINS-CI 00:01:35.450 [Pipeline] sh 00:01:35.734 + cat autorun-spdk.conf 00:01:35.734 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.734 SPDK_TEST_NVMF=1 00:01:35.734 SPDK_TEST_NVME_CLI=1 00:01:35.734 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.734 SPDK_TEST_NVMF_NICS=e810 00:01:35.734 SPDK_RUN_ASAN=1 00:01:35.734 SPDK_RUN_UBSAN=1 00:01:35.734 NET_TYPE=phy 00:01:35.741 RUN_NIGHTLY=1 00:01:35.746 [Pipeline] readFile 00:01:35.769 [Pipeline] withEnv 00:01:35.771 [Pipeline] { 00:01:35.783 [Pipeline] sh 00:01:36.068 + set -ex 00:01:36.068 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:36.068 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:36.068 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.068 ++ SPDK_TEST_NVMF=1 00:01:36.068 ++ SPDK_TEST_NVME_CLI=1 00:01:36.068 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.068 ++ SPDK_TEST_NVMF_NICS=e810 00:01:36.068 ++ SPDK_RUN_ASAN=1 00:01:36.068 ++ SPDK_RUN_UBSAN=1 00:01:36.068 ++ NET_TYPE=phy 00:01:36.068 ++ RUN_NIGHTLY=1 00:01:36.068 + case $SPDK_TEST_NVMF_NICS in 00:01:36.068 + DRIVERS=ice 00:01:36.068 + [[ tcp == \r\d\m\a ]] 00:01:36.068 + [[ -n ice ]] 00:01:36.068 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:36.068 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:36.068 rmmod: ERROR: Module mlx5_ib 
is not currently loaded 00:01:36.068 rmmod: ERROR: Module i40iw is not currently loaded 00:01:36.068 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:36.068 + true 00:01:36.068 + for D in $DRIVERS 00:01:36.068 + sudo modprobe ice 00:01:36.068 + exit 0 00:01:36.077 [Pipeline] } 00:01:36.088 [Pipeline] // withEnv 00:01:36.093 [Pipeline] } 00:01:36.138 [Pipeline] // stage 00:01:36.170 [Pipeline] catchError 00:01:36.171 [Pipeline] { 00:01:36.178 [Pipeline] timeout 00:01:36.178 Timeout set to expire in 1 hr 0 min 00:01:36.179 [Pipeline] { 00:01:36.185 [Pipeline] stage 00:01:36.186 [Pipeline] { (Tests) 00:01:36.194 [Pipeline] sh 00:01:36.473 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.473 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.473 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.473 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:36.473 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:36.473 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:36.473 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:36.473 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:36.473 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:36.473 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:36.473 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:36.473 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:36.473 + source /etc/os-release 00:01:36.473 ++ NAME='Fedora Linux' 00:01:36.473 ++ VERSION='39 (Cloud Edition)' 00:01:36.473 ++ ID=fedora 00:01:36.473 ++ VERSION_ID=39 00:01:36.473 ++ VERSION_CODENAME= 00:01:36.473 ++ PLATFORM_ID=platform:f39 00:01:36.473 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:36.474 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:36.474 ++ LOGO=fedora-logo-icon 00:01:36.474 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:36.474 ++ HOME_URL=https://fedoraproject.org/ 00:01:36.474 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:36.474 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:36.474 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:36.474 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:36.474 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:36.474 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:36.474 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:36.474 ++ SUPPORT_END=2024-11-12 00:01:36.474 ++ VARIANT='Cloud Edition' 00:01:36.474 ++ VARIANT_ID=cloud 00:01:36.474 + uname -a 00:01:36.474 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:36.474 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:39.007 Hugepages 00:01:39.007 node hugesize free / total 00:01:39.007 node0 1048576kB 0 / 0 00:01:39.007 node0 2048kB 0 / 0 00:01:39.007 node1 1048576kB 0 / 0 00:01:39.007 node1 2048kB 0 / 0 00:01:39.007 00:01:39.007 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:39.007 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:39.007 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:39.007 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:39.007 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:39.007 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:39.007 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:39.007 + rm -f /tmp/spdk-ld-path 00:01:39.007 + source autorun-spdk.conf 00:01:39.007 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.007 ++ SPDK_TEST_NVMF=1 00:01:39.007 ++ SPDK_TEST_NVME_CLI=1 00:01:39.007 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:39.007 ++ SPDK_TEST_NVMF_NICS=e810 00:01:39.007 ++ SPDK_RUN_ASAN=1 00:01:39.007 ++ SPDK_RUN_UBSAN=1 00:01:39.007 ++ NET_TYPE=phy 00:01:39.007 ++ RUN_NIGHTLY=1 00:01:39.007 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:39.007 + [[ -n '' ]] 00:01:39.007 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:39.007 + for M in /var/spdk/build-*-manifest.txt 00:01:39.007 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:39.007 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:39.007 + for M in /var/spdk/build-*-manifest.txt 00:01:39.007 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:39.007 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:39.007 + for M in /var/spdk/build-*-manifest.txt 00:01:39.007 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:39.007 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:39.007 ++ uname 00:01:39.007 + [[ Linux == \L\i\n\u\x ]] 00:01:39.007 + sudo dmesg -T 00:01:39.007 + sudo dmesg --clear 00:01:39.007 + dmesg_pid=3707934 00:01:39.007 + [[ Fedora Linux == FreeBSD ]] 00:01:39.007 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.007 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:39.007 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:39.007 + sudo dmesg -Tw 00:01:39.007 + [[ -x /usr/src/fio-static/fio ]] 00:01:39.007 + export FIO_BIN=/usr/src/fio-static/fio 00:01:39.007 + FIO_BIN=/usr/src/fio-static/fio 00:01:39.007 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:39.007 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:39.007 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:39.007 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.007 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:39.007 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:39.007 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.007 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:39.007 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.007 23:42:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:39.007 23:42:18 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_ASAN=1 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:39.007 23:42:18 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=1 00:01:39.007 23:42:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:39.007 23:42:18 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:39.007 23:42:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:39.007 23:42:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:39.007 23:42:18 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:39.007 23:42:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:39.007 23:42:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:39.007 23:42:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:39.007 23:42:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.007 23:42:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.007 23:42:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.007 23:42:18 -- paths/export.sh@5 -- $ export PATH 00:01:39.007 23:42:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:39.007 23:42:18 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:39.007 23:42:18 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:39.007 23:42:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734129738.XXXXXX 00:01:39.007 23:42:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734129738.toPI2V 00:01:39.007 23:42:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:39.007 23:42:18 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:39.007 23:42:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:39.007 23:42:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:39.008 23:42:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.008 23:42:18 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:39.008 23:42:18 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:39.008 23:42:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.008 23:42:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:39.008 23:42:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:39.008 23:42:18 -- pm/common@17 -- $ local monitor 00:01:39.008 23:42:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.008 23:42:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.008 23:42:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.008 23:42:18 -- pm/common@21 -- $ date +%s 00:01:39.008 23:42:18 -- pm/common@21 -- $ date +%s 00:01:39.008 23:42:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.008 23:42:18 -- pm/common@25 -- $ sleep 1 00:01:39.008 23:42:18 -- pm/common@21 -- $ date +%s 00:01:39.008 23:42:18 -- pm/common@21 -- $ date +%s 00:01:39.008 23:42:18 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734129738 00:01:39.008 23:42:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734129738 00:01:39.008 23:42:18 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734129738 00:01:39.008 23:42:18 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734129738 00:01:39.267 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734129738_collect-vmstat.pm.log 00:01:39.267 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734129738_collect-cpu-load.pm.log 00:01:39.267 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734129738_collect-cpu-temp.pm.log 00:01:39.267 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734129738_collect-bmc-pm.bmc.pm.log 00:01:40.204 23:42:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:40.204 23:42:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:40.204 23:42:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:40.204 23:42:19 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:40.204 23:42:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:40.204 Fri Dec 13 10:42:19 PM UTC 2024 00:01:40.204 23:42:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:40.204 v25.01-rc1-2-ge01cb43b8 00:01:40.204 23:42:19 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:40.204 23:42:19 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:40.204 23:42:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.204 23:42:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.204 23:42:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.204 ************************************ 00:01:40.204 START TEST asan 00:01:40.204 ************************************ 00:01:40.204 23:42:19 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:40.204 using asan 00:01:40.204 00:01:40.204 real 0m0.001s 00:01:40.204 user 0m0.000s 00:01:40.204 sys 0m0.000s 00:01:40.204 23:42:19 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.204 23:42:19 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.204 ************************************ 00:01:40.204 END TEST asan 00:01:40.204 ************************************ 00:01:40.204 23:42:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:40.204 23:42:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:40.204 23:42:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:40.204 23:42:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:40.204 23:42:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.204 ************************************ 00:01:40.204 START TEST ubsan 00:01:40.204 ************************************ 00:01:40.204 23:42:19 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:40.204 using ubsan 00:01:40.204 00:01:40.204 real 0m0.000s 00:01:40.204 user 0m0.000s 00:01:40.204 sys 0m0.000s 00:01:40.204 23:42:19 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:40.204 23:42:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:40.204 ************************************ 00:01:40.204 END TEST ubsan 00:01:40.204 
************************************ 00:01:40.204 23:42:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:40.204 23:42:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:40.204 23:42:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:40.204 23:42:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:40.463 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:40.463 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:40.722 Using 'verbs' RDMA provider 00:01:53.883 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:06.107 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:06.107 Creating mk/config.mk...done. 00:02:06.107 Creating mk/cc.flags.mk...done. 00:02:06.107 Type 'make' to build. 
00:02:06.107 23:42:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:06.107 23:42:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:06.107 23:42:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:06.107 23:42:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.107 ************************************ 00:02:06.107 START TEST make 00:02:06.107 ************************************ 00:02:06.107 23:42:43 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:14.221 The Meson build system 00:02:14.222 Version: 1.5.0 00:02:14.222 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:14.222 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:14.222 Build type: native build 00:02:14.222 Program cat found: YES (/usr/bin/cat) 00:02:14.222 Project name: DPDK 00:02:14.222 Project version: 24.03.0 00:02:14.222 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:14.222 C linker for the host machine: cc ld.bfd 2.40-14 00:02:14.222 Host machine cpu family: x86_64 00:02:14.222 Host machine cpu: x86_64 00:02:14.222 Message: ## Building in Developer Mode ## 00:02:14.222 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.222 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.222 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.222 Program python3 found: YES (/usr/bin/python3) 00:02:14.222 Program cat found: YES (/usr/bin/cat) 00:02:14.222 Compiler for C supports arguments -march=native: YES 00:02:14.222 Checking for size of "void *" : 8 00:02:14.222 Checking for size of "void *" : 8 (cached) 00:02:14.222 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:14.222 Library m found: YES 00:02:14.222 Library numa found: YES 00:02:14.222 Has 
header "numaif.h" : YES 00:02:14.222 Library fdt found: NO 00:02:14.222 Library execinfo found: NO 00:02:14.222 Has header "execinfo.h" : YES 00:02:14.222 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:14.222 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.222 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.222 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.222 Run-time dependency openssl found: YES 3.1.1 00:02:14.222 Run-time dependency libpcap found: YES 1.10.4 00:02:14.222 Has header "pcap.h" with dependency libpcap: YES 00:02:14.222 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.222 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.222 Compiler for C supports arguments -Wformat: YES 00:02:14.222 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.222 Compiler for C supports arguments -Wformat-security: NO 00:02:14.222 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.222 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.222 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.222 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.222 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.222 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.222 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.222 Compiler for C supports arguments -Wundef: YES 00:02:14.222 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.222 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.222 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.222 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.222 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.222 Program objdump found: YES (/usr/bin/objdump) 00:02:14.222 Compiler for C supports arguments -mavx512f: YES 
00:02:14.222 Checking if "AVX512 checking" compiles: YES 00:02:14.222 Fetching value of define "__SSE4_2__" : 1 00:02:14.222 Fetching value of define "__AES__" : 1 00:02:14.222 Fetching value of define "__AVX__" : 1 00:02:14.222 Fetching value of define "__AVX2__" : 1 00:02:14.222 Fetching value of define "__AVX512BW__" : 1 00:02:14.222 Fetching value of define "__AVX512CD__" : 1 00:02:14.222 Fetching value of define "__AVX512DQ__" : 1 00:02:14.222 Fetching value of define "__AVX512F__" : 1 00:02:14.222 Fetching value of define "__AVX512VL__" : 1 00:02:14.222 Fetching value of define "__PCLMUL__" : 1 00:02:14.222 Fetching value of define "__RDRND__" : 1 00:02:14.222 Fetching value of define "__RDSEED__" : 1 00:02:14.222 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.222 Fetching value of define "__znver1__" : (undefined) 00:02:14.222 Fetching value of define "__znver2__" : (undefined) 00:02:14.222 Fetching value of define "__znver3__" : (undefined) 00:02:14.222 Fetching value of define "__znver4__" : (undefined) 00:02:14.222 Library asan found: YES 00:02:14.222 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.222 Message: lib/log: Defining dependency "log" 00:02:14.222 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.222 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.222 Library rt found: YES 00:02:14.222 Checking for function "getentropy" : NO 00:02:14.222 Message: lib/eal: Defining dependency "eal" 00:02:14.222 Message: lib/ring: Defining dependency "ring" 00:02:14.222 Message: lib/rcu: Defining dependency "rcu" 00:02:14.222 Message: lib/mempool: Defining dependency "mempool" 00:02:14.222 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.222 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.222 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:14.222 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:14.222 Fetching value of define "__AVX512DQ__" : 1 (cached) 
00:02:14.222 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:14.222 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:14.222 Compiler for C supports arguments -mpclmul: YES 00:02:14.222 Compiler for C supports arguments -maes: YES 00:02:14.222 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.222 Compiler for C supports arguments -mavx512bw: YES 00:02:14.222 Compiler for C supports arguments -mavx512dq: YES 00:02:14.222 Compiler for C supports arguments -mavx512vl: YES 00:02:14.222 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.222 Compiler for C supports arguments -mavx2: YES 00:02:14.222 Compiler for C supports arguments -mavx: YES 00:02:14.222 Message: lib/net: Defining dependency "net" 00:02:14.222 Message: lib/meter: Defining dependency "meter" 00:02:14.222 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.222 Message: lib/pci: Defining dependency "pci" 00:02:14.222 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.222 Message: lib/hash: Defining dependency "hash" 00:02:14.222 Message: lib/timer: Defining dependency "timer" 00:02:14.222 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.222 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.222 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.222 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.222 Message: lib/power: Defining dependency "power" 00:02:14.222 Message: lib/reorder: Defining dependency "reorder" 00:02:14.222 Message: lib/security: Defining dependency "security" 00:02:14.222 Has header "linux/userfaultfd.h" : YES 00:02:14.222 Has header "linux/vduse.h" : YES 00:02:14.222 Message: lib/vhost: Defining dependency "vhost" 00:02:14.222 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.222 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.222 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.222 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.222 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.222 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.222 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.222 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.222 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.222 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.222 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:14.222 Configuring doxy-api-html.conf using configuration 00:02:14.222 Configuring doxy-api-man.conf using configuration 00:02:14.222 Program mandb found: YES (/usr/bin/mandb) 00:02:14.222 Program sphinx-build found: NO 00:02:14.222 Configuring rte_build_config.h using configuration 00:02:14.222 Message: 00:02:14.222 ================= 00:02:14.222 Applications Enabled 00:02:14.222 ================= 00:02:14.222 00:02:14.222 apps: 00:02:14.222 00:02:14.222 00:02:14.222 Message: 00:02:14.222 ================= 00:02:14.222 Libraries Enabled 00:02:14.222 ================= 00:02:14.222 00:02:14.222 libs: 00:02:14.222 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.222 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.222 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.222 00:02:14.222 Message: 00:02:14.222 =============== 00:02:14.222 Drivers Enabled 00:02:14.222 =============== 00:02:14.222 00:02:14.222 common: 00:02:14.222 00:02:14.222 bus: 00:02:14.222 pci, vdev, 00:02:14.222 mempool: 00:02:14.222 ring, 00:02:14.222 dma: 00:02:14.222 00:02:14.222 net: 00:02:14.222 00:02:14.222 crypto: 00:02:14.222 00:02:14.222 compress: 00:02:14.222 00:02:14.222 vdpa: 00:02:14.222 00:02:14.222 00:02:14.222 Message: 00:02:14.222 ================= 00:02:14.222 Content Skipped 00:02:14.222 ================= 
00:02:14.222 00:02:14.222 apps: 00:02:14.222 dumpcap: explicitly disabled via build config 00:02:14.222 graph: explicitly disabled via build config 00:02:14.222 pdump: explicitly disabled via build config 00:02:14.222 proc-info: explicitly disabled via build config 00:02:14.222 test-acl: explicitly disabled via build config 00:02:14.222 test-bbdev: explicitly disabled via build config 00:02:14.222 test-cmdline: explicitly disabled via build config 00:02:14.222 test-compress-perf: explicitly disabled via build config 00:02:14.222 test-crypto-perf: explicitly disabled via build config 00:02:14.222 test-dma-perf: explicitly disabled via build config 00:02:14.223 test-eventdev: explicitly disabled via build config 00:02:14.223 test-fib: explicitly disabled via build config 00:02:14.223 test-flow-perf: explicitly disabled via build config 00:02:14.223 test-gpudev: explicitly disabled via build config 00:02:14.223 test-mldev: explicitly disabled via build config 00:02:14.223 test-pipeline: explicitly disabled via build config 00:02:14.223 test-pmd: explicitly disabled via build config 00:02:14.223 test-regex: explicitly disabled via build config 00:02:14.223 test-sad: explicitly disabled via build config 00:02:14.223 test-security-perf: explicitly disabled via build config 00:02:14.223 00:02:14.223 libs: 00:02:14.223 argparse: explicitly disabled via build config 00:02:14.223 metrics: explicitly disabled via build config 00:02:14.223 acl: explicitly disabled via build config 00:02:14.223 bbdev: explicitly disabled via build config 00:02:14.223 bitratestats: explicitly disabled via build config 00:02:14.223 bpf: explicitly disabled via build config 00:02:14.223 cfgfile: explicitly disabled via build config 00:02:14.223 distributor: explicitly disabled via build config 00:02:14.223 efd: explicitly disabled via build config 00:02:14.223 eventdev: explicitly disabled via build config 00:02:14.223 dispatcher: explicitly disabled via build config 00:02:14.223 gpudev: 
explicitly disabled via build config 00:02:14.223 gro: explicitly disabled via build config 00:02:14.223 gso: explicitly disabled via build config 00:02:14.223 ip_frag: explicitly disabled via build config 00:02:14.223 jobstats: explicitly disabled via build config 00:02:14.223 latencystats: explicitly disabled via build config 00:02:14.223 lpm: explicitly disabled via build config 00:02:14.223 member: explicitly disabled via build config 00:02:14.223 pcapng: explicitly disabled via build config 00:02:14.223 rawdev: explicitly disabled via build config 00:02:14.223 regexdev: explicitly disabled via build config 00:02:14.223 mldev: explicitly disabled via build config 00:02:14.223 rib: explicitly disabled via build config 00:02:14.223 sched: explicitly disabled via build config 00:02:14.223 stack: explicitly disabled via build config 00:02:14.223 ipsec: explicitly disabled via build config 00:02:14.223 pdcp: explicitly disabled via build config 00:02:14.223 fib: explicitly disabled via build config 00:02:14.223 port: explicitly disabled via build config 00:02:14.223 pdump: explicitly disabled via build config 00:02:14.223 table: explicitly disabled via build config 00:02:14.223 pipeline: explicitly disabled via build config 00:02:14.223 graph: explicitly disabled via build config 00:02:14.223 node: explicitly disabled via build config 00:02:14.223 00:02:14.223 drivers: 00:02:14.223 common/cpt: not in enabled drivers build config 00:02:14.223 common/dpaax: not in enabled drivers build config 00:02:14.223 common/iavf: not in enabled drivers build config 00:02:14.223 common/idpf: not in enabled drivers build config 00:02:14.223 common/ionic: not in enabled drivers build config 00:02:14.223 common/mvep: not in enabled drivers build config 00:02:14.223 common/octeontx: not in enabled drivers build config 00:02:14.223 bus/auxiliary: not in enabled drivers build config 00:02:14.223 bus/cdx: not in enabled drivers build config 00:02:14.223 bus/dpaa: not in enabled drivers 
build config 00:02:14.223 bus/fslmc: not in enabled drivers build config 00:02:14.223 bus/ifpga: not in enabled drivers build config 00:02:14.223 bus/platform: not in enabled drivers build config 00:02:14.223 bus/uacce: not in enabled drivers build config 00:02:14.223 bus/vmbus: not in enabled drivers build config 00:02:14.223 common/cnxk: not in enabled drivers build config 00:02:14.223 common/mlx5: not in enabled drivers build config 00:02:14.223 common/nfp: not in enabled drivers build config 00:02:14.223 common/nitrox: not in enabled drivers build config 00:02:14.223 common/qat: not in enabled drivers build config 00:02:14.223 common/sfc_efx: not in enabled drivers build config 00:02:14.223 mempool/bucket: not in enabled drivers build config 00:02:14.223 mempool/cnxk: not in enabled drivers build config 00:02:14.223 mempool/dpaa: not in enabled drivers build config 00:02:14.223 mempool/dpaa2: not in enabled drivers build config 00:02:14.223 mempool/octeontx: not in enabled drivers build config 00:02:14.223 mempool/stack: not in enabled drivers build config 00:02:14.223 dma/cnxk: not in enabled drivers build config 00:02:14.223 dma/dpaa: not in enabled drivers build config 00:02:14.223 dma/dpaa2: not in enabled drivers build config 00:02:14.223 dma/hisilicon: not in enabled drivers build config 00:02:14.223 dma/idxd: not in enabled drivers build config 00:02:14.223 dma/ioat: not in enabled drivers build config 00:02:14.223 dma/skeleton: not in enabled drivers build config 00:02:14.223 net/af_packet: not in enabled drivers build config 00:02:14.223 net/af_xdp: not in enabled drivers build config 00:02:14.223 net/ark: not in enabled drivers build config 00:02:14.223 net/atlantic: not in enabled drivers build config 00:02:14.223 net/avp: not in enabled drivers build config 00:02:14.223 net/axgbe: not in enabled drivers build config 00:02:14.223 net/bnx2x: not in enabled drivers build config 00:02:14.223 net/bnxt: not in enabled drivers build config 00:02:14.223 
net/bonding: not in enabled drivers build config 00:02:14.223 net/cnxk: not in enabled drivers build config 00:02:14.223 net/cpfl: not in enabled drivers build config 00:02:14.223 net/cxgbe: not in enabled drivers build config 00:02:14.223 net/dpaa: not in enabled drivers build config 00:02:14.223 net/dpaa2: not in enabled drivers build config 00:02:14.223 net/e1000: not in enabled drivers build config 00:02:14.223 net/ena: not in enabled drivers build config 00:02:14.223 net/enetc: not in enabled drivers build config 00:02:14.223 net/enetfec: not in enabled drivers build config 00:02:14.223 net/enic: not in enabled drivers build config 00:02:14.223 net/failsafe: not in enabled drivers build config 00:02:14.223 net/fm10k: not in enabled drivers build config 00:02:14.223 net/gve: not in enabled drivers build config 00:02:14.223 net/hinic: not in enabled drivers build config 00:02:14.223 net/hns3: not in enabled drivers build config 00:02:14.223 net/i40e: not in enabled drivers build config 00:02:14.223 net/iavf: not in enabled drivers build config 00:02:14.223 net/ice: not in enabled drivers build config 00:02:14.223 net/idpf: not in enabled drivers build config 00:02:14.223 net/igc: not in enabled drivers build config 00:02:14.223 net/ionic: not in enabled drivers build config 00:02:14.223 net/ipn3ke: not in enabled drivers build config 00:02:14.223 net/ixgbe: not in enabled drivers build config 00:02:14.223 net/mana: not in enabled drivers build config 00:02:14.223 net/memif: not in enabled drivers build config 00:02:14.223 net/mlx4: not in enabled drivers build config 00:02:14.223 net/mlx5: not in enabled drivers build config 00:02:14.223 net/mvneta: not in enabled drivers build config 00:02:14.223 net/mvpp2: not in enabled drivers build config 00:02:14.223 net/netvsc: not in enabled drivers build config 00:02:14.223 net/nfb: not in enabled drivers build config 00:02:14.223 net/nfp: not in enabled drivers build config 00:02:14.223 net/ngbe: not in enabled drivers 
build config 00:02:14.223 net/null: not in enabled drivers build config 00:02:14.223 net/octeontx: not in enabled drivers build config 00:02:14.223 net/octeon_ep: not in enabled drivers build config 00:02:14.223 net/pcap: not in enabled drivers build config 00:02:14.223 net/pfe: not in enabled drivers build config 00:02:14.223 net/qede: not in enabled drivers build config 00:02:14.223 net/ring: not in enabled drivers build config 00:02:14.223 net/sfc: not in enabled drivers build config 00:02:14.223 net/softnic: not in enabled drivers build config 00:02:14.223 net/tap: not in enabled drivers build config 00:02:14.223 net/thunderx: not in enabled drivers build config 00:02:14.223 net/txgbe: not in enabled drivers build config 00:02:14.223 net/vdev_netvsc: not in enabled drivers build config 00:02:14.223 net/vhost: not in enabled drivers build config 00:02:14.223 net/virtio: not in enabled drivers build config 00:02:14.223 net/vmxnet3: not in enabled drivers build config 00:02:14.223 raw/*: missing internal dependency, "rawdev" 00:02:14.223 crypto/armv8: not in enabled drivers build config 00:02:14.223 crypto/bcmfs: not in enabled drivers build config 00:02:14.223 crypto/caam_jr: not in enabled drivers build config 00:02:14.223 crypto/ccp: not in enabled drivers build config 00:02:14.223 crypto/cnxk: not in enabled drivers build config 00:02:14.223 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.223 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.223 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.223 crypto/mlx5: not in enabled drivers build config 00:02:14.223 crypto/mvsam: not in enabled drivers build config 00:02:14.223 crypto/nitrox: not in enabled drivers build config 00:02:14.223 crypto/null: not in enabled drivers build config 00:02:14.223 crypto/octeontx: not in enabled drivers build config 00:02:14.223 crypto/openssl: not in enabled drivers build config 00:02:14.223 crypto/scheduler: not in enabled drivers build 
config 00:02:14.223 crypto/uadk: not in enabled drivers build config 00:02:14.223 crypto/virtio: not in enabled drivers build config 00:02:14.223 compress/isal: not in enabled drivers build config 00:02:14.223 compress/mlx5: not in enabled drivers build config 00:02:14.223 compress/nitrox: not in enabled drivers build config 00:02:14.223 compress/octeontx: not in enabled drivers build config 00:02:14.223 compress/zlib: not in enabled drivers build config 00:02:14.223 regex/*: missing internal dependency, "regexdev" 00:02:14.223 ml/*: missing internal dependency, "mldev" 00:02:14.223 vdpa/ifc: not in enabled drivers build config 00:02:14.223 vdpa/mlx5: not in enabled drivers build config 00:02:14.223 vdpa/nfp: not in enabled drivers build config 00:02:14.223 vdpa/sfc: not in enabled drivers build config 00:02:14.223 event/*: missing internal dependency, "eventdev" 00:02:14.223 baseband/*: missing internal dependency, "bbdev" 00:02:14.223 gpu/*: missing internal dependency, "gpudev" 00:02:14.223 00:02:14.223 00:02:14.223 Build targets in project: 85 00:02:14.223 00:02:14.223 DPDK 24.03.0 00:02:14.223 00:02:14.223 User defined options 00:02:14.223 buildtype : debug 00:02:14.223 default_library : shared 00:02:14.223 libdir : lib 00:02:14.223 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:14.223 b_sanitize : address 00:02:14.223 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:14.223 c_link_args : 00:02:14.223 cpu_instruction_set: native 00:02:14.224 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:14.224 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:14.224 enable_docs : false 00:02:14.224 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:14.224 enable_kmods : false 00:02:14.224 max_lcores : 128 00:02:14.224 tests : false 00:02:14.224 00:02:14.224 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.224 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:14.224 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.224 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.224 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.224 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.224 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.224 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.224 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.224 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.224 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:14.224 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.224 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.224 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.224 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:14.224 [14/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.224 [15/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:14.224 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:14.224 [17/268] Linking static target lib/librte_kvargs.a 00:02:14.224 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.224 [19/268] Linking static target lib/librte_log.a 00:02:14.488 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:14.488 [21/268] Linking static target lib/librte_pci.a 00:02:14.488 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:14.488 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:14.488 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:14.752 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:14.752 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:14.752 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:14.752 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.752 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.752 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.752 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:14.752 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.752 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.752 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.752 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:14.752 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:14.752 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.752 [38/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.752 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:14.752 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:14.752 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:14.752 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.752 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.752 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:14.752 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:14.752 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:14.752 [47/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.752 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.752 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.752 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.752 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:14.752 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:14.752 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:14.752 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.752 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:14.752 [56/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.752 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.752 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.752 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:14.752 [60/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:14.752 [61/268] Linking static target lib/librte_meter.a 00:02:14.752 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.752 [63/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.752 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:14.752 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:14.752 [66/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:14.752 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:14.752 [68/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:14.752 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:14.752 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:14.752 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:14.752 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:14.752 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.752 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.752 [75/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:14.752 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.752 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.752 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:14.752 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.752 [80/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.012 [81/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.012 [82/268] Linking static target 
lib/librte_ring.a 00:02:15.012 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.012 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.012 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.012 [86/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:15.012 [87/268] Linking static target lib/librte_telemetry.a 00:02:15.012 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:15.012 [89/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.012 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:15.012 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:15.012 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.012 [93/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.012 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.012 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.012 [96/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.012 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:15.012 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:15.012 [99/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.012 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.012 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:15.012 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.012 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.012 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.012 [105/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.012 [106/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.012 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.012 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.012 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:15.012 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:15.012 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:15.012 [112/268] Linking static target lib/librte_cmdline.a 00:02:15.012 [113/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:15.012 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.012 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:15.012 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:15.012 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.012 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.012 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:15.012 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:15.012 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.012 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.012 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:15.012 [124/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:15.012 [125/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.012 [126/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.271 [127/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.271 [128/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:15.271 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.271 [130/268] Linking static target lib/librte_mempool.a 00:02:15.271 [131/268] Linking target lib/librte_log.so.24.1 00:02:15.271 [132/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:15.271 [133/268] Linking static target lib/librte_net.a 00:02:15.271 [134/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.271 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.271 [136/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.271 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.271 [138/268] Linking static target lib/librte_eal.a 00:02:15.271 [139/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.271 [140/268] Linking static target lib/librte_rcu.a 00:02:15.271 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.271 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.271 [143/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:15.271 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.271 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:15.271 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.271 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:15.271 [148/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.271 [149/268] Linking target lib/librte_kvargs.so.24.1 00:02:15.271 [150/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 
00:02:15.271 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.271 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.271 [153/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.271 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.271 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.271 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.271 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:15.271 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.271 [159/268] Linking target lib/librte_telemetry.so.24.1 00:02:15.529 [160/268] Linking static target lib/librte_timer.a 00:02:15.529 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.529 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.529 [163/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.529 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.529 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:15.530 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.530 [167/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.530 [168/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.530 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:15.530 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.530 [171/268] Linking static target lib/librte_dmadev.a 00:02:15.530 [172/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.530 [173/268] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.530 [174/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.530 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.530 [176/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:15.530 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.530 [178/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.530 [179/268] Linking static target lib/librte_power.a 00:02:15.530 [180/268] Linking static target lib/librte_reorder.a 00:02:15.530 [181/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:15.530 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.530 [183/268] Linking static target lib/librte_compressdev.a 00:02:15.530 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.530 [185/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.530 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.530 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.530 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:15.530 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.530 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:15.530 [191/268] Linking static target lib/librte_mbuf.a 00:02:15.788 [192/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.788 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.788 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.788 [195/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.788 
[196/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:15.788 [197/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.788 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.788 [199/268] Linking static target drivers/librte_bus_pci.a 00:02:15.788 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.788 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.788 [202/268] Linking static target lib/librte_security.a 00:02:15.788 [203/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:15.788 [204/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.788 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:15.788 [206/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.788 [207/268] Linking static target drivers/librte_mempool_ring.a 00:02:15.788 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.047 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.047 [210/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.047 [211/268] Linking static target lib/librte_hash.a 00:02:16.047 [212/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.047 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.047 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.047 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.047 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:16.304 [217/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.304 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.304 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.304 [220/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.561 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.561 [222/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.561 [223/268] Linking static target lib/librte_cryptodev.a 00:02:16.818 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.818 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.818 [226/268] Linking static target lib/librte_ethdev.a 00:02:18.253 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.253 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.532 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.532 [230/268] Linking static target lib/librte_vhost.a 00:02:22.902 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.800 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.057 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.057 [234/268] Linking target lib/librte_eal.so.24.1 00:02:25.057 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.315 [236/268] Linking target lib/librte_ring.so.24.1 00:02:25.315 [237/268] Linking target lib/librte_meter.so.24.1 00:02:25.315 [238/268] Linking target 
lib/librte_pci.so.24.1 00:02:25.315 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.315 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:25.315 [241/268] Linking target lib/librte_timer.so.24.1 00:02:25.315 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.315 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.315 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.315 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.315 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.315 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:25.315 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:25.315 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.572 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:25.572 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.572 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:25.572 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.830 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.830 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:25.830 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.830 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.830 [258/268] Linking target lib/librte_net.so.24.1 00:02:25.830 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.830 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.830 [261/268] Linking target lib/librte_security.so.24.1 00:02:25.830 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.830 [263/268] Linking target 
lib/librte_hash.so.24.1 00:02:26.086 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:26.086 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.086 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.086 [267/268] Linking target lib/librte_power.so.24.1 00:02:26.086 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.086 INFO: autodetecting backend as ninja 00:02:26.086 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:38.275 CC lib/log/log.o 00:02:38.275 CC lib/log/log_flags.o 00:02:38.275 CC lib/log/log_deprecated.o 00:02:38.275 CC lib/ut/ut.o 00:02:38.275 CC lib/ut_mock/mock.o 00:02:38.275 LIB libspdk_log.a 00:02:38.275 LIB libspdk_ut.a 00:02:38.275 LIB libspdk_ut_mock.a 00:02:38.275 SO libspdk_log.so.7.1 00:02:38.275 SO libspdk_ut.so.2.0 00:02:38.275 SO libspdk_ut_mock.so.6.0 00:02:38.275 SYMLINK libspdk_log.so 00:02:38.275 SYMLINK libspdk_ut.so 00:02:38.275 SYMLINK libspdk_ut_mock.so 00:02:38.275 CC lib/dma/dma.o 00:02:38.275 CC lib/util/base64.o 00:02:38.275 CC lib/util/bit_array.o 00:02:38.275 CC lib/util/cpuset.o 00:02:38.275 CXX lib/trace_parser/trace.o 00:02:38.275 CC lib/util/crc16.o 00:02:38.275 CC lib/util/crc32.o 00:02:38.275 CC lib/util/crc32c.o 00:02:38.275 CC lib/util/crc32_ieee.o 00:02:38.275 CC lib/ioat/ioat.o 00:02:38.275 CC lib/util/crc64.o 00:02:38.275 CC lib/util/dif.o 00:02:38.275 CC lib/util/fd.o 00:02:38.275 CC lib/util/fd_group.o 00:02:38.275 CC lib/util/file.o 00:02:38.275 CC lib/util/hexlify.o 00:02:38.275 CC lib/util/iov.o 00:02:38.275 CC lib/util/math.o 00:02:38.275 CC lib/util/net.o 00:02:38.275 CC lib/util/pipe.o 00:02:38.275 CC lib/util/strerror_tls.o 00:02:38.275 CC lib/util/string.o 00:02:38.275 CC lib/util/uuid.o 00:02:38.275 CC lib/util/xor.o 00:02:38.275 CC lib/util/zipf.o 00:02:38.275 CC lib/util/md5.o 00:02:38.275 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:38.275 CC lib/vfio_user/host/vfio_user.o 00:02:38.275 LIB libspdk_dma.a 00:02:38.275 SO libspdk_dma.so.5.0 00:02:38.275 SYMLINK libspdk_dma.so 00:02:38.275 LIB libspdk_ioat.a 00:02:38.275 SO libspdk_ioat.so.7.0 00:02:38.275 SYMLINK libspdk_ioat.so 00:02:38.275 LIB libspdk_vfio_user.a 00:02:38.275 SO libspdk_vfio_user.so.5.0 00:02:38.275 SYMLINK libspdk_vfio_user.so 00:02:38.275 LIB libspdk_util.a 00:02:38.275 SO libspdk_util.so.10.1 00:02:38.275 LIB libspdk_trace_parser.a 00:02:38.275 SYMLINK libspdk_util.so 00:02:38.275 SO libspdk_trace_parser.so.6.0 00:02:38.532 SYMLINK libspdk_trace_parser.so 00:02:38.532 CC lib/idxd/idxd.o 00:02:38.532 CC lib/idxd/idxd_kernel.o 00:02:38.532 CC lib/idxd/idxd_user.o 00:02:38.532 CC lib/conf/conf.o 00:02:38.532 CC lib/rdma_utils/rdma_utils.o 00:02:38.532 CC lib/json/json_parse.o 00:02:38.532 CC lib/json/json_util.o 00:02:38.532 CC lib/json/json_write.o 00:02:38.532 CC lib/env_dpdk/env.o 00:02:38.532 CC lib/env_dpdk/init.o 00:02:38.532 CC lib/env_dpdk/memory.o 00:02:38.532 CC lib/env_dpdk/pci.o 00:02:38.532 CC lib/vmd/vmd.o 00:02:38.532 CC lib/env_dpdk/threads.o 00:02:38.532 CC lib/env_dpdk/pci_ioat.o 00:02:38.532 CC lib/vmd/led.o 00:02:38.532 CC lib/env_dpdk/pci_virtio.o 00:02:38.532 CC lib/env_dpdk/pci_idxd.o 00:02:38.532 CC lib/env_dpdk/pci_vmd.o 00:02:38.532 CC lib/env_dpdk/sigbus_handler.o 00:02:38.532 CC lib/env_dpdk/pci_event.o 00:02:38.532 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:38.532 CC lib/env_dpdk/pci_dpdk.o 00:02:38.532 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:38.789 LIB libspdk_conf.a 00:02:38.789 SO libspdk_conf.so.6.0 00:02:38.789 LIB libspdk_rdma_utils.a 00:02:39.046 SO libspdk_rdma_utils.so.1.0 00:02:39.046 SYMLINK libspdk_conf.so 00:02:39.046 LIB libspdk_json.a 00:02:39.046 SO libspdk_json.so.6.0 00:02:39.046 SYMLINK libspdk_rdma_utils.so 00:02:39.046 SYMLINK libspdk_json.so 00:02:39.304 LIB libspdk_idxd.a 00:02:39.304 SO libspdk_idxd.so.12.1 00:02:39.304 CC 
lib/rdma_provider/common.o 00:02:39.304 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:39.304 LIB libspdk_vmd.a 00:02:39.304 SO libspdk_vmd.so.6.0 00:02:39.304 CC lib/jsonrpc/jsonrpc_server.o 00:02:39.304 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:39.304 CC lib/jsonrpc/jsonrpc_client.o 00:02:39.304 SYMLINK libspdk_idxd.so 00:02:39.304 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:39.561 SYMLINK libspdk_vmd.so 00:02:39.561 LIB libspdk_rdma_provider.a 00:02:39.561 SO libspdk_rdma_provider.so.7.0 00:02:39.561 LIB libspdk_jsonrpc.a 00:02:39.561 SYMLINK libspdk_rdma_provider.so 00:02:39.561 SO libspdk_jsonrpc.so.6.0 00:02:39.818 SYMLINK libspdk_jsonrpc.so 00:02:40.075 LIB libspdk_env_dpdk.a 00:02:40.075 CC lib/rpc/rpc.o 00:02:40.075 SO libspdk_env_dpdk.so.15.1 00:02:40.075 SYMLINK libspdk_env_dpdk.so 00:02:40.332 LIB libspdk_rpc.a 00:02:40.332 SO libspdk_rpc.so.6.0 00:02:40.332 SYMLINK libspdk_rpc.so 00:02:40.589 CC lib/keyring/keyring.o 00:02:40.589 CC lib/keyring/keyring_rpc.o 00:02:40.589 CC lib/trace/trace.o 00:02:40.589 CC lib/trace/trace_flags.o 00:02:40.589 CC lib/trace/trace_rpc.o 00:02:40.589 CC lib/notify/notify_rpc.o 00:02:40.589 CC lib/notify/notify.o 00:02:40.847 LIB libspdk_notify.a 00:02:40.847 SO libspdk_notify.so.6.0 00:02:40.847 LIB libspdk_keyring.a 00:02:40.847 LIB libspdk_trace.a 00:02:40.847 SO libspdk_keyring.so.2.0 00:02:40.847 SYMLINK libspdk_notify.so 00:02:40.847 SO libspdk_trace.so.11.0 00:02:40.847 SYMLINK libspdk_keyring.so 00:02:41.104 SYMLINK libspdk_trace.so 00:02:41.361 CC lib/sock/sock_rpc.o 00:02:41.361 CC lib/sock/sock.o 00:02:41.361 CC lib/thread/thread.o 00:02:41.361 CC lib/thread/iobuf.o 00:02:41.618 LIB libspdk_sock.a 00:02:41.618 SO libspdk_sock.so.10.0 00:02:41.875 SYMLINK libspdk_sock.so 00:02:42.133 CC lib/nvme/nvme_ctrlr.o 00:02:42.133 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:42.133 CC lib/nvme/nvme_fabric.o 00:02:42.133 CC lib/nvme/nvme_ns_cmd.o 00:02:42.133 CC lib/nvme/nvme_ns.o 00:02:42.133 CC lib/nvme/nvme_pcie_common.o 
00:02:42.133 CC lib/nvme/nvme_pcie.o 00:02:42.133 CC lib/nvme/nvme_qpair.o 00:02:42.133 CC lib/nvme/nvme.o 00:02:42.133 CC lib/nvme/nvme_quirks.o 00:02:42.133 CC lib/nvme/nvme_transport.o 00:02:42.133 CC lib/nvme/nvme_discovery.o 00:02:42.133 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:42.133 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:42.133 CC lib/nvme/nvme_io_msg.o 00:02:42.133 CC lib/nvme/nvme_tcp.o 00:02:42.133 CC lib/nvme/nvme_opal.o 00:02:42.133 CC lib/nvme/nvme_stubs.o 00:02:42.133 CC lib/nvme/nvme_poll_group.o 00:02:42.133 CC lib/nvme/nvme_zns.o 00:02:42.133 CC lib/nvme/nvme_auth.o 00:02:42.133 CC lib/nvme/nvme_cuse.o 00:02:42.133 CC lib/nvme/nvme_rdma.o 00:02:42.697 LIB libspdk_thread.a 00:02:42.697 SO libspdk_thread.so.11.0 00:02:42.954 SYMLINK libspdk_thread.so 00:02:43.211 CC lib/accel/accel.o 00:02:43.211 CC lib/accel/accel_rpc.o 00:02:43.211 CC lib/virtio/virtio.o 00:02:43.211 CC lib/accel/accel_sw.o 00:02:43.211 CC lib/blob/blobstore.o 00:02:43.211 CC lib/virtio/virtio_vhost_user.o 00:02:43.211 CC lib/fsdev/fsdev.o 00:02:43.211 CC lib/blob/blob_bs_dev.o 00:02:43.211 CC lib/virtio/virtio_vfio_user.o 00:02:43.211 CC lib/blob/request.o 00:02:43.211 CC lib/blob/zeroes.o 00:02:43.211 CC lib/virtio/virtio_pci.o 00:02:43.211 CC lib/fsdev/fsdev_io.o 00:02:43.211 CC lib/fsdev/fsdev_rpc.o 00:02:43.211 CC lib/init/json_config.o 00:02:43.211 CC lib/init/subsystem.o 00:02:43.211 CC lib/init/subsystem_rpc.o 00:02:43.211 CC lib/init/rpc.o 00:02:43.468 LIB libspdk_init.a 00:02:43.468 SO libspdk_init.so.6.0 00:02:43.468 LIB libspdk_virtio.a 00:02:43.468 SO libspdk_virtio.so.7.0 00:02:43.468 SYMLINK libspdk_init.so 00:02:43.726 SYMLINK libspdk_virtio.so 00:02:43.726 LIB libspdk_fsdev.a 00:02:43.726 SO libspdk_fsdev.so.2.0 00:02:43.983 CC lib/event/app.o 00:02:43.983 CC lib/event/reactor.o 00:02:43.983 CC lib/event/log_rpc.o 00:02:43.983 CC lib/event/scheduler_static.o 00:02:43.983 CC lib/event/app_rpc.o 00:02:43.983 SYMLINK libspdk_fsdev.so 00:02:44.240 LIB libspdk_nvme.a 
00:02:44.240 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:44.240 LIB libspdk_accel.a 00:02:44.240 SO libspdk_accel.so.16.0 00:02:44.240 SO libspdk_nvme.so.15.0 00:02:44.240 SYMLINK libspdk_accel.so 00:02:44.240 LIB libspdk_event.a 00:02:44.497 SO libspdk_event.so.14.0 00:02:44.497 SYMLINK libspdk_event.so 00:02:44.497 SYMLINK libspdk_nvme.so 00:02:44.755 CC lib/bdev/bdev.o 00:02:44.755 CC lib/bdev/bdev_rpc.o 00:02:44.755 CC lib/bdev/bdev_zone.o 00:02:44.755 CC lib/bdev/scsi_nvme.o 00:02:44.755 CC lib/bdev/part.o 00:02:44.755 LIB libspdk_fuse_dispatcher.a 00:02:44.755 SO libspdk_fuse_dispatcher.so.1.0 00:02:45.012 SYMLINK libspdk_fuse_dispatcher.so 00:02:46.382 LIB libspdk_blob.a 00:02:46.382 SO libspdk_blob.so.12.0 00:02:46.382 SYMLINK libspdk_blob.so 00:02:46.639 CC lib/lvol/lvol.o 00:02:46.639 CC lib/blobfs/blobfs.o 00:02:46.639 CC lib/blobfs/tree.o 00:02:47.204 LIB libspdk_bdev.a 00:02:47.204 SO libspdk_bdev.so.17.0 00:02:47.204 SYMLINK libspdk_bdev.so 00:02:47.462 LIB libspdk_blobfs.a 00:02:47.462 SO libspdk_blobfs.so.11.0 00:02:47.462 LIB libspdk_lvol.a 00:02:47.462 SYMLINK libspdk_blobfs.so 00:02:47.462 SO libspdk_lvol.so.11.0 00:02:47.462 CC lib/ftl/ftl_core.o 00:02:47.462 CC lib/ublk/ublk.o 00:02:47.462 CC lib/ftl/ftl_init.o 00:02:47.462 CC lib/nvmf/ctrlr.o 00:02:47.462 CC lib/ftl/ftl_layout.o 00:02:47.462 CC lib/ublk/ublk_rpc.o 00:02:47.462 CC lib/ftl/ftl_debug.o 00:02:47.462 CC lib/nvmf/ctrlr_discovery.o 00:02:47.462 CC lib/ftl/ftl_io.o 00:02:47.462 CC lib/ftl/ftl_l2p_flat.o 00:02:47.462 CC lib/ftl/ftl_sb.o 00:02:47.462 CC lib/nvmf/ctrlr_bdev.o 00:02:47.462 CC lib/ftl/ftl_l2p.o 00:02:47.462 CC lib/ftl/ftl_nv_cache.o 00:02:47.462 CC lib/nvmf/subsystem.o 00:02:47.462 CC lib/nvmf/nvmf.o 00:02:47.462 CC lib/ftl/ftl_band.o 00:02:47.462 CC lib/nvmf/nvmf_rpc.o 00:02:47.462 CC lib/ftl/ftl_band_ops.o 00:02:47.462 CC lib/nvmf/transport.o 00:02:47.462 CC lib/ftl/ftl_writer.o 00:02:47.462 CC lib/ftl/ftl_l2p_cache.o 00:02:47.462 CC lib/nvmf/tcp.o 00:02:47.462 CC 
lib/ftl/ftl_reloc.o 00:02:47.462 CC lib/ftl/ftl_rq.o 00:02:47.462 CC lib/ftl/ftl_p2l.o 00:02:47.462 CC lib/nvmf/stubs.o 00:02:47.462 CC lib/ftl/ftl_p2l_log.o 00:02:47.462 CC lib/nvmf/mdns_server.o 00:02:47.462 CC lib/nvmf/rdma.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.462 CC lib/nvmf/auth.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.462 CC lib/scsi/dev.o 00:02:47.462 CC lib/scsi/lun.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:47.462 CC lib/nbd/nbd.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:47.462 CC lib/scsi/scsi.o 00:02:47.462 CC lib/nbd/nbd_rpc.o 00:02:47.462 CC lib/scsi/port.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:47.462 CC lib/scsi/scsi_pr.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:47.462 CC lib/scsi/scsi_bdev.o 00:02:47.462 CC lib/scsi/scsi_rpc.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:47.462 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:47.462 CC lib/ftl/utils/ftl_conf.o 00:02:47.462 CC lib/scsi/task.o 00:02:47.462 CC lib/ftl/utils/ftl_md.o 00:02:47.462 CC lib/ftl/utils/ftl_mempool.o 00:02:47.462 CC lib/ftl/utils/ftl_bitmap.o 00:02:47.721 CC lib/ftl/utils/ftl_property.o 00:02:47.721 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.721 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.721 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.721 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.721 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.721 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.721 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 
00:02:47.721 CC lib/ftl/base/ftl_base_dev.o 00:02:47.721 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:47.721 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.721 CC lib/ftl/ftl_trace.o 00:02:47.721 SYMLINK libspdk_lvol.so 00:02:48.286 LIB libspdk_nbd.a 00:02:48.286 SO libspdk_nbd.so.7.0 00:02:48.286 SYMLINK libspdk_nbd.so 00:02:48.543 LIB libspdk_scsi.a 00:02:48.543 SO libspdk_scsi.so.9.0 00:02:48.543 LIB libspdk_ublk.a 00:02:48.543 SYMLINK libspdk_scsi.so 00:02:48.543 SO libspdk_ublk.so.3.0 00:02:48.801 SYMLINK libspdk_ublk.so 00:02:48.801 LIB libspdk_ftl.a 00:02:48.801 CC lib/iscsi/conn.o 00:02:48.801 CC lib/iscsi/init_grp.o 00:02:48.801 CC lib/iscsi/param.o 00:02:48.801 CC lib/iscsi/iscsi.o 00:02:48.801 CC lib/iscsi/portal_grp.o 00:02:48.801 CC lib/iscsi/tgt_node.o 00:02:48.801 CC lib/iscsi/iscsi_subsystem.o 00:02:48.801 CC lib/iscsi/task.o 00:02:48.801 CC lib/iscsi/iscsi_rpc.o 00:02:48.801 CC lib/vhost/vhost.o 00:02:48.801 CC lib/vhost/vhost_scsi.o 00:02:48.801 CC lib/vhost/vhost_rpc.o 00:02:48.801 CC lib/vhost/vhost_blk.o 00:02:48.801 CC lib/vhost/rte_vhost_user.o 00:02:49.058 SO libspdk_ftl.so.9.0 00:02:49.316 SYMLINK libspdk_ftl.so 00:02:49.881 LIB libspdk_vhost.a 00:02:49.881 LIB libspdk_nvmf.a 00:02:49.881 SO libspdk_vhost.so.8.0 00:02:49.881 SO libspdk_nvmf.so.20.0 00:02:49.881 SYMLINK libspdk_vhost.so 00:02:50.138 SYMLINK libspdk_nvmf.so 00:02:50.138 LIB libspdk_iscsi.a 00:02:50.396 SO libspdk_iscsi.so.8.0 00:02:50.396 SYMLINK libspdk_iscsi.so 00:02:50.960 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.960 LIB libspdk_env_dpdk_rpc.a 00:02:50.960 CC module/scheduler/gscheduler/gscheduler.o 00:02:50.960 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.960 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.960 CC module/accel/dsa/accel_dsa_rpc.o 00:02:50.960 CC module/accel/dsa/accel_dsa.o 00:02:50.960 CC module/accel/ioat/accel_ioat.o 00:02:50.960 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.960 CC module/accel/ioat/accel_ioat_rpc.o 00:02:50.960 CC 
module/keyring/linux/keyring_rpc.o 00:02:50.960 CC module/keyring/linux/keyring.o 00:02:50.960 CC module/sock/posix/posix.o 00:02:50.960 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:50.960 CC module/fsdev/aio/linux_aio_mgr.o 00:02:50.960 CC module/fsdev/aio/fsdev_aio.o 00:02:50.960 CC module/accel/error/accel_error.o 00:02:50.960 CC module/keyring/file/keyring.o 00:02:50.960 CC module/accel/error/accel_error_rpc.o 00:02:50.960 CC module/keyring/file/keyring_rpc.o 00:02:51.218 CC module/blob/bdev/blob_bdev.o 00:02:51.218 CC module/accel/iaa/accel_iaa.o 00:02:51.218 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.218 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.218 LIB libspdk_keyring_linux.a 00:02:51.218 LIB libspdk_scheduler_gscheduler.a 00:02:51.218 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.218 SO libspdk_keyring_linux.so.1.0 00:02:51.218 LIB libspdk_keyring_file.a 00:02:51.218 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:51.218 LIB libspdk_accel_ioat.a 00:02:51.218 SO libspdk_scheduler_gscheduler.so.4.0 00:02:51.218 LIB libspdk_scheduler_dynamic.a 00:02:51.218 SO libspdk_keyring_file.so.2.0 00:02:51.218 SO libspdk_accel_ioat.so.6.0 00:02:51.218 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.218 SYMLINK libspdk_keyring_linux.so 00:02:51.218 LIB libspdk_accel_error.a 00:02:51.218 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.218 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.218 LIB libspdk_accel_iaa.a 00:02:51.475 SO libspdk_accel_error.so.2.0 00:02:51.475 SYMLINK libspdk_keyring_file.so 00:02:51.475 SYMLINK libspdk_accel_ioat.so 00:02:51.475 SO libspdk_accel_iaa.so.3.0 00:02:51.475 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.475 LIB libspdk_accel_dsa.a 00:02:51.475 LIB libspdk_blob_bdev.a 00:02:51.475 SYMLINK libspdk_accel_error.so 00:02:51.475 SO libspdk_accel_dsa.so.5.0 00:02:51.475 SO libspdk_blob_bdev.so.12.0 00:02:51.475 SYMLINK libspdk_accel_iaa.so 00:02:51.475 SYMLINK libspdk_accel_dsa.so 00:02:51.475 SYMLINK libspdk_blob_bdev.so 00:02:51.733 LIB 
libspdk_fsdev_aio.a 00:02:51.733 SO libspdk_fsdev_aio.so.1.0 00:02:51.733 LIB libspdk_sock_posix.a 00:02:51.991 SO libspdk_sock_posix.so.6.0 00:02:51.991 SYMLINK libspdk_fsdev_aio.so 00:02:51.991 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.991 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.991 SYMLINK libspdk_sock_posix.so 00:02:51.991 CC module/bdev/delay/vbdev_delay.o 00:02:51.991 CC module/bdev/error/vbdev_error.o 00:02:51.991 CC module/bdev/gpt/gpt.o 00:02:51.991 CC module/bdev/ftl/bdev_ftl.o 00:02:51.991 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.991 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.991 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.991 CC module/bdev/null/bdev_null.o 00:02:51.991 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:51.991 CC module/bdev/null/bdev_null_rpc.o 00:02:51.991 CC module/bdev/raid/bdev_raid.o 00:02:51.991 CC module/bdev/aio/bdev_aio_rpc.o 00:02:51.991 CC module/bdev/raid/bdev_raid_sb.o 00:02:51.991 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:51.991 CC module/bdev/aio/bdev_aio.o 00:02:51.991 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:51.991 CC module/bdev/raid/bdev_raid_rpc.o 00:02:51.991 CC module/bdev/passthru/vbdev_passthru.o 00:02:51.991 CC module/bdev/raid/raid0.o 00:02:51.991 CC module/bdev/malloc/bdev_malloc.o 00:02:51.991 CC module/bdev/iscsi/bdev_iscsi.o 00:02:51.991 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:51.991 CC module/bdev/raid/raid1.o 00:02:51.991 CC module/bdev/raid/concat.o 00:02:51.991 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:51.991 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.991 CC module/bdev/split/vbdev_split.o 00:02:51.991 CC module/bdev/split/vbdev_split_rpc.o 00:02:51.991 CC module/bdev/nvme/bdev_nvme.o 00:02:51.991 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:51.991 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:51.991 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:51.991 CC module/bdev/nvme/nvme_rpc.o 00:02:51.991 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:51.991 CC 
module/bdev/nvme/vbdev_opal.o 00:02:51.991 CC module/bdev/nvme/bdev_mdns_client.o 00:02:51.991 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:51.991 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.991 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:51.991 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.249 LIB libspdk_blobfs_bdev.a 00:02:52.249 SO libspdk_blobfs_bdev.so.6.0 00:02:52.249 SYMLINK libspdk_blobfs_bdev.so 00:02:52.249 LIB libspdk_bdev_split.a 00:02:52.249 LIB libspdk_bdev_error.a 00:02:52.249 LIB libspdk_bdev_null.a 00:02:52.249 LIB libspdk_bdev_gpt.a 00:02:52.507 LIB libspdk_bdev_ftl.a 00:02:52.507 SO libspdk_bdev_split.so.6.0 00:02:52.507 SO libspdk_bdev_null.so.6.0 00:02:52.507 SO libspdk_bdev_error.so.6.0 00:02:52.507 SO libspdk_bdev_gpt.so.6.0 00:02:52.507 SO libspdk_bdev_ftl.so.6.0 00:02:52.507 LIB libspdk_bdev_passthru.a 00:02:52.507 LIB libspdk_bdev_zone_block.a 00:02:52.507 LIB libspdk_bdev_aio.a 00:02:52.507 SYMLINK libspdk_bdev_split.so 00:02:52.507 SYMLINK libspdk_bdev_error.so 00:02:52.507 SYMLINK libspdk_bdev_null.so 00:02:52.507 SO libspdk_bdev_passthru.so.6.0 00:02:52.507 LIB libspdk_bdev_iscsi.a 00:02:52.507 SYMLINK libspdk_bdev_gpt.so 00:02:52.507 SO libspdk_bdev_zone_block.so.6.0 00:02:52.507 LIB libspdk_bdev_delay.a 00:02:52.507 SO libspdk_bdev_aio.so.6.0 00:02:52.507 SYMLINK libspdk_bdev_ftl.so 00:02:52.507 LIB libspdk_bdev_malloc.a 00:02:52.507 SO libspdk_bdev_delay.so.6.0 00:02:52.507 SO libspdk_bdev_iscsi.so.6.0 00:02:52.507 SO libspdk_bdev_malloc.so.6.0 00:02:52.507 SYMLINK libspdk_bdev_zone_block.so 00:02:52.507 SYMLINK libspdk_bdev_passthru.so 00:02:52.507 SYMLINK libspdk_bdev_aio.so 00:02:52.507 SYMLINK libspdk_bdev_delay.so 00:02:52.507 SYMLINK libspdk_bdev_iscsi.so 00:02:52.507 SYMLINK libspdk_bdev_malloc.so 00:02:52.764 LIB libspdk_bdev_lvol.a 00:02:52.764 LIB libspdk_bdev_virtio.a 00:02:52.764 SO libspdk_bdev_lvol.so.6.0 00:02:52.764 SO libspdk_bdev_virtio.so.6.0 00:02:52.764 SYMLINK libspdk_bdev_lvol.so 00:02:52.764 SYMLINK 
libspdk_bdev_virtio.so 00:02:53.022 LIB libspdk_bdev_raid.a 00:02:53.022 SO libspdk_bdev_raid.so.6.0 00:02:53.278 SYMLINK libspdk_bdev_raid.so 00:02:54.650 LIB libspdk_bdev_nvme.a 00:02:54.650 SO libspdk_bdev_nvme.so.7.1 00:02:54.650 SYMLINK libspdk_bdev_nvme.so 00:02:55.214 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:55.214 CC module/event/subsystems/keyring/keyring.o 00:02:55.214 CC module/event/subsystems/vmd/vmd.o 00:02:55.214 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:55.472 CC module/event/subsystems/sock/sock.o 00:02:55.472 CC module/event/subsystems/fsdev/fsdev.o 00:02:55.472 CC module/event/subsystems/iobuf/iobuf.o 00:02:55.472 CC module/event/subsystems/scheduler/scheduler.o 00:02:55.472 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:55.472 LIB libspdk_event_keyring.a 00:02:55.472 LIB libspdk_event_vhost_blk.a 00:02:55.472 LIB libspdk_event_fsdev.a 00:02:55.472 SO libspdk_event_keyring.so.1.0 00:02:55.472 LIB libspdk_event_scheduler.a 00:02:55.472 SO libspdk_event_vhost_blk.so.3.0 00:02:55.472 LIB libspdk_event_sock.a 00:02:55.472 LIB libspdk_event_vmd.a 00:02:55.472 SO libspdk_event_fsdev.so.1.0 00:02:55.472 LIB libspdk_event_iobuf.a 00:02:55.472 SO libspdk_event_scheduler.so.4.0 00:02:55.472 SO libspdk_event_sock.so.5.0 00:02:55.472 SO libspdk_event_iobuf.so.3.0 00:02:55.472 SYMLINK libspdk_event_keyring.so 00:02:55.472 SO libspdk_event_vmd.so.6.0 00:02:55.472 SYMLINK libspdk_event_vhost_blk.so 00:02:55.472 SYMLINK libspdk_event_fsdev.so 00:02:55.472 SYMLINK libspdk_event_scheduler.so 00:02:55.472 SYMLINK libspdk_event_sock.so 00:02:55.472 SYMLINK libspdk_event_iobuf.so 00:02:55.472 SYMLINK libspdk_event_vmd.so 00:02:56.038 CC module/event/subsystems/accel/accel.o 00:02:56.038 LIB libspdk_event_accel.a 00:02:56.038 SO libspdk_event_accel.so.6.0 00:02:56.298 SYMLINK libspdk_event_accel.so 00:02:56.617 CC module/event/subsystems/bdev/bdev.o 00:02:56.617 LIB libspdk_event_bdev.a 00:02:56.617 SO libspdk_event_bdev.so.6.0 00:02:56.901 
SYMLINK libspdk_event_bdev.so 00:02:57.159 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.159 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:57.159 CC module/event/subsystems/nbd/nbd.o 00:02:57.159 CC module/event/subsystems/ublk/ublk.o 00:02:57.159 CC module/event/subsystems/scsi/scsi.o 00:02:57.159 LIB libspdk_event_nbd.a 00:02:57.159 LIB libspdk_event_ublk.a 00:02:57.159 LIB libspdk_event_scsi.a 00:02:57.159 SO libspdk_event_nbd.so.6.0 00:02:57.159 LIB libspdk_event_nvmf.a 00:02:57.159 SO libspdk_event_ublk.so.3.0 00:02:57.159 SO libspdk_event_scsi.so.6.0 00:02:57.159 SO libspdk_event_nvmf.so.6.0 00:02:57.418 SYMLINK libspdk_event_nbd.so 00:02:57.418 SYMLINK libspdk_event_ublk.so 00:02:57.418 SYMLINK libspdk_event_scsi.so 00:02:57.418 SYMLINK libspdk_event_nvmf.so 00:02:57.676 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.676 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.676 LIB libspdk_event_vhost_scsi.a 00:02:57.933 SO libspdk_event_vhost_scsi.so.3.0 00:02:57.933 LIB libspdk_event_iscsi.a 00:02:57.933 SYMLINK libspdk_event_vhost_scsi.so 00:02:57.933 SO libspdk_event_iscsi.so.6.0 00:02:57.933 SYMLINK libspdk_event_iscsi.so 00:02:58.191 SO libspdk.so.6.0 00:02:58.191 SYMLINK libspdk.so 00:02:58.450 CC app/trace_record/trace_record.o 00:02:58.450 TEST_HEADER include/spdk/accel.h 00:02:58.450 TEST_HEADER include/spdk/assert.h 00:02:58.450 TEST_HEADER include/spdk/accel_module.h 00:02:58.450 TEST_HEADER include/spdk/barrier.h 00:02:58.450 TEST_HEADER include/spdk/bdev.h 00:02:58.450 TEST_HEADER include/spdk/bdev_zone.h 00:02:58.450 TEST_HEADER include/spdk/base64.h 00:02:58.450 TEST_HEADER include/spdk/bdev_module.h 00:02:58.450 TEST_HEADER include/spdk/bit_pool.h 00:02:58.450 CC app/spdk_top/spdk_top.o 00:02:58.450 CC test/rpc_client/rpc_client_test.o 00:02:58.450 TEST_HEADER include/spdk/bit_array.h 00:02:58.450 TEST_HEADER include/spdk/blob_bdev.h 00:02:58.450 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:58.450 TEST_HEADER 
include/spdk/config.h 00:02:58.450 TEST_HEADER include/spdk/conf.h 00:02:58.450 TEST_HEADER include/spdk/blobfs.h 00:02:58.450 TEST_HEADER include/spdk/blob.h 00:02:58.450 TEST_HEADER include/spdk/cpuset.h 00:02:58.450 CXX app/trace/trace.o 00:02:58.450 TEST_HEADER include/spdk/crc16.h 00:02:58.450 TEST_HEADER include/spdk/crc32.h 00:02:58.450 TEST_HEADER include/spdk/crc64.h 00:02:58.450 TEST_HEADER include/spdk/dma.h 00:02:58.450 TEST_HEADER include/spdk/dif.h 00:02:58.450 CC app/spdk_nvme_perf/perf.o 00:02:58.450 TEST_HEADER include/spdk/endian.h 00:02:58.450 TEST_HEADER include/spdk/env.h 00:02:58.450 TEST_HEADER include/spdk/fd.h 00:02:58.450 TEST_HEADER include/spdk/fd_group.h 00:02:58.450 TEST_HEADER include/spdk/event.h 00:02:58.450 TEST_HEADER include/spdk/env_dpdk.h 00:02:58.450 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.450 TEST_HEADER include/spdk/file.h 00:02:58.450 TEST_HEADER include/spdk/fsdev.h 00:02:58.450 TEST_HEADER include/spdk/fsdev_module.h 00:02:58.450 TEST_HEADER include/spdk/gpt_spec.h 00:02:58.450 TEST_HEADER include/spdk/ftl.h 00:02:58.450 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:58.450 TEST_HEADER include/spdk/hexlify.h 00:02:58.450 CC app/spdk_lspci/spdk_lspci.o 00:02:58.450 TEST_HEADER include/spdk/idxd_spec.h 00:02:58.450 TEST_HEADER include/spdk/histogram_data.h 00:02:58.450 TEST_HEADER include/spdk/idxd.h 00:02:58.450 TEST_HEADER include/spdk/init.h 00:02:58.450 TEST_HEADER include/spdk/ioat.h 00:02:58.450 TEST_HEADER include/spdk/ioat_spec.h 00:02:58.450 CC app/spdk_nvme_identify/identify.o 00:02:58.450 TEST_HEADER include/spdk/iscsi_spec.h 00:02:58.450 TEST_HEADER include/spdk/json.h 00:02:58.450 TEST_HEADER include/spdk/keyring.h 00:02:58.450 TEST_HEADER include/spdk/jsonrpc.h 00:02:58.450 TEST_HEADER include/spdk/keyring_module.h 00:02:58.450 TEST_HEADER include/spdk/likely.h 00:02:58.450 TEST_HEADER include/spdk/lvol.h 00:02:58.450 TEST_HEADER include/spdk/log.h 00:02:58.450 TEST_HEADER include/spdk/md5.h 
00:02:58.450 TEST_HEADER include/spdk/mmio.h 00:02:58.450 TEST_HEADER include/spdk/memory.h 00:02:58.450 TEST_HEADER include/spdk/nbd.h 00:02:58.450 CC app/spdk_dd/spdk_dd.o 00:02:58.450 TEST_HEADER include/spdk/net.h 00:02:58.450 TEST_HEADER include/spdk/notify.h 00:02:58.450 TEST_HEADER include/spdk/nvme.h 00:02:58.450 TEST_HEADER include/spdk/nvme_intel.h 00:02:58.450 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:58.450 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:58.450 TEST_HEADER include/spdk/nvme_spec.h 00:02:58.450 TEST_HEADER include/spdk/nvme_zns.h 00:02:58.450 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:58.450 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:58.450 TEST_HEADER include/spdk/nvmf.h 00:02:58.450 TEST_HEADER include/spdk/nvmf_spec.h 00:02:58.450 TEST_HEADER include/spdk/nvmf_transport.h 00:02:58.450 TEST_HEADER include/spdk/opal.h 00:02:58.450 TEST_HEADER include/spdk/opal_spec.h 00:02:58.450 TEST_HEADER include/spdk/pci_ids.h 00:02:58.450 TEST_HEADER include/spdk/pipe.h 00:02:58.450 TEST_HEADER include/spdk/rpc.h 00:02:58.450 TEST_HEADER include/spdk/reduce.h 00:02:58.450 TEST_HEADER include/spdk/queue.h 00:02:58.450 TEST_HEADER include/spdk/scsi.h 00:02:58.450 TEST_HEADER include/spdk/scheduler.h 00:02:58.450 TEST_HEADER include/spdk/sock.h 00:02:58.450 TEST_HEADER include/spdk/stdinc.h 00:02:58.450 TEST_HEADER include/spdk/string.h 00:02:58.450 TEST_HEADER include/spdk/scsi_spec.h 00:02:58.450 TEST_HEADER include/spdk/thread.h 00:02:58.450 CC app/iscsi_tgt/iscsi_tgt.o 00:02:58.451 TEST_HEADER include/spdk/trace_parser.h 00:02:58.451 CC app/spdk_tgt/spdk_tgt.o 00:02:58.451 TEST_HEADER include/spdk/trace.h 00:02:58.451 TEST_HEADER include/spdk/tree.h 00:02:58.451 TEST_HEADER include/spdk/ublk.h 00:02:58.451 TEST_HEADER include/spdk/util.h 00:02:58.451 CC app/nvmf_tgt/nvmf_main.o 00:02:58.451 TEST_HEADER include/spdk/uuid.h 00:02:58.451 TEST_HEADER include/spdk/version.h 00:02:58.451 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:58.451 
TEST_HEADER include/spdk/vfio_user_spec.h 00:02:58.451 TEST_HEADER include/spdk/vhost.h 00:02:58.451 TEST_HEADER include/spdk/xor.h 00:02:58.451 TEST_HEADER include/spdk/vmd.h 00:02:58.451 TEST_HEADER include/spdk/zipf.h 00:02:58.451 CXX test/cpp_headers/accel.o 00:02:58.451 CXX test/cpp_headers/accel_module.o 00:02:58.451 CXX test/cpp_headers/barrier.o 00:02:58.451 CXX test/cpp_headers/assert.o 00:02:58.451 CXX test/cpp_headers/bdev_zone.o 00:02:58.451 CXX test/cpp_headers/bdev.o 00:02:58.451 CXX test/cpp_headers/base64.o 00:02:58.451 CXX test/cpp_headers/bdev_module.o 00:02:58.451 CXX test/cpp_headers/bit_pool.o 00:02:58.451 CXX test/cpp_headers/blob_bdev.o 00:02:58.451 CXX test/cpp_headers/bit_array.o 00:02:58.451 CXX test/cpp_headers/blob.o 00:02:58.451 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.451 CXX test/cpp_headers/blobfs.o 00:02:58.451 CXX test/cpp_headers/config.o 00:02:58.451 CXX test/cpp_headers/conf.o 00:02:58.451 CXX test/cpp_headers/crc16.o 00:02:58.451 CXX test/cpp_headers/cpuset.o 00:02:58.451 CXX test/cpp_headers/crc32.o 00:02:58.451 CXX test/cpp_headers/crc64.o 00:02:58.451 CXX test/cpp_headers/dif.o 00:02:58.451 CXX test/cpp_headers/dma.o 00:02:58.451 CXX test/cpp_headers/env_dpdk.o 00:02:58.451 CXX test/cpp_headers/env.o 00:02:58.451 CXX test/cpp_headers/event.o 00:02:58.451 CXX test/cpp_headers/fd_group.o 00:02:58.451 CXX test/cpp_headers/endian.o 00:02:58.451 CXX test/cpp_headers/file.o 00:02:58.451 CXX test/cpp_headers/fd.o 00:02:58.451 CXX test/cpp_headers/fsdev.o 00:02:58.451 CXX test/cpp_headers/ftl.o 00:02:58.451 CXX test/cpp_headers/fsdev_module.o 00:02:58.451 CXX test/cpp_headers/hexlify.o 00:02:58.451 CXX test/cpp_headers/gpt_spec.o 00:02:58.451 CXX test/cpp_headers/idxd_spec.o 00:02:58.451 CXX test/cpp_headers/init.o 00:02:58.451 CXX test/cpp_headers/histogram_data.o 00:02:58.451 CXX test/cpp_headers/idxd.o 00:02:58.451 CXX test/cpp_headers/ioat_spec.o 00:02:58.451 CXX test/cpp_headers/ioat.o 00:02:58.451 CXX 
test/cpp_headers/json.o 00:02:58.451 CXX test/cpp_headers/jsonrpc.o 00:02:58.451 CXX test/cpp_headers/iscsi_spec.o 00:02:58.451 CXX test/cpp_headers/keyring_module.o 00:02:58.451 CXX test/cpp_headers/keyring.o 00:02:58.451 CXX test/cpp_headers/likely.o 00:02:58.451 CXX test/cpp_headers/log.o 00:02:58.451 CXX test/cpp_headers/lvol.o 00:02:58.451 CXX test/cpp_headers/md5.o 00:02:58.451 CXX test/cpp_headers/memory.o 00:02:58.451 CXX test/cpp_headers/nbd.o 00:02:58.451 CXX test/cpp_headers/mmio.o 00:02:58.451 CXX test/cpp_headers/net.o 00:02:58.718 CXX test/cpp_headers/notify.o 00:02:58.718 CXX test/cpp_headers/nvme_intel.o 00:02:58.718 CXX test/cpp_headers/nvme.o 00:02:58.718 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.718 CXX test/cpp_headers/nvme_spec.o 00:02:58.718 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:58.718 CXX test/cpp_headers/nvme_zns.o 00:02:58.718 CXX test/cpp_headers/nvmf_cmd.o 00:02:58.718 CXX test/cpp_headers/nvmf.o 00:02:58.718 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:58.718 CXX test/cpp_headers/nvmf_spec.o 00:02:58.718 CXX test/cpp_headers/opal.o 00:02:58.718 CXX test/cpp_headers/nvmf_transport.o 00:02:58.718 CXX test/cpp_headers/opal_spec.o 00:02:58.718 CC examples/ioat/perf/perf.o 00:02:58.718 CC examples/util/zipf/zipf.o 00:02:58.718 CC examples/ioat/verify/verify.o 00:02:58.718 CC test/app/jsoncat/jsoncat.o 00:02:58.718 CC test/app/stub/stub.o 00:02:58.718 CXX test/cpp_headers/pci_ids.o 00:02:58.718 CC test/env/vtophys/vtophys.o 00:02:58.718 CC app/fio/nvme/fio_plugin.o 00:02:58.718 CC test/thread/poller_perf/poller_perf.o 00:02:58.718 CC test/app/histogram_perf/histogram_perf.o 00:02:58.718 CC test/env/pci/pci_ut.o 00:02:58.718 CC test/env/memory/memory_ut.o 00:02:58.718 CC test/app/bdev_svc/bdev_svc.o 00:02:58.718 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.718 CC test/dma/test_dma/test_dma.o 00:02:58.718 CC app/fio/bdev/fio_plugin.o 00:02:58.983 LINK interrupt_tgt 00:02:58.983 LINK spdk_lspci 00:02:58.983 LINK 
spdk_nvme_discover 00:02:58.983 LINK nvmf_tgt 00:02:58.983 LINK spdk_tgt 00:02:58.983 LINK rpc_client_test 00:02:58.983 CC test/env/mem_callbacks/mem_callbacks.o 00:02:59.245 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.245 LINK zipf 00:02:59.245 LINK histogram_perf 00:02:59.245 CXX test/cpp_headers/pipe.o 00:02:59.245 CXX test/cpp_headers/queue.o 00:02:59.245 LINK poller_perf 00:02:59.245 CXX test/cpp_headers/reduce.o 00:02:59.245 CXX test/cpp_headers/rpc.o 00:02:59.245 CXX test/cpp_headers/scheduler.o 00:02:59.245 CXX test/cpp_headers/scsi.o 00:02:59.245 CXX test/cpp_headers/scsi_spec.o 00:02:59.245 CXX test/cpp_headers/sock.o 00:02:59.245 LINK iscsi_tgt 00:02:59.245 CXX test/cpp_headers/stdinc.o 00:02:59.245 CXX test/cpp_headers/string.o 00:02:59.245 CXX test/cpp_headers/thread.o 00:02:59.245 LINK stub 00:02:59.245 CXX test/cpp_headers/trace.o 00:02:59.245 LINK jsoncat 00:02:59.245 CXX test/cpp_headers/trace_parser.o 00:02:59.245 CXX test/cpp_headers/tree.o 00:02:59.245 CXX test/cpp_headers/ublk.o 00:02:59.245 CXX test/cpp_headers/util.o 00:02:59.245 CXX test/cpp_headers/uuid.o 00:02:59.245 CXX test/cpp_headers/version.o 00:02:59.245 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.245 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.245 CXX test/cpp_headers/vhost.o 00:02:59.245 LINK bdev_svc 00:02:59.245 CXX test/cpp_headers/vmd.o 00:02:59.245 LINK vtophys 00:02:59.245 CXX test/cpp_headers/xor.o 00:02:59.245 CXX test/cpp_headers/zipf.o 00:02:59.245 LINK spdk_trace_record 00:02:59.245 LINK env_dpdk_post_init 00:02:59.245 LINK ioat_perf 00:02:59.503 LINK verify 00:02:59.503 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.503 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.503 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.503 LINK spdk_dd 00:02:59.503 LINK spdk_trace 00:02:59.761 LINK pci_ut 00:02:59.761 LINK nvme_fuzz 00:02:59.761 CC test/event/reactor/reactor.o 00:02:59.761 LINK spdk_bdev 00:02:59.761 CC test/event/reactor_perf/reactor_perf.o 
00:02:59.761 CC test/event/event_perf/event_perf.o 00:02:59.761 CC test/event/app_repeat/app_repeat.o 00:02:59.761 LINK spdk_nvme 00:02:59.761 CC test/event/scheduler/scheduler.o 00:02:59.761 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.761 CC examples/vmd/led/led.o 00:02:59.761 CC examples/sock/hello_world/hello_sock.o 00:02:59.761 CC examples/idxd/perf/perf.o 00:02:59.761 LINK test_dma 00:02:59.761 CC examples/thread/thread/thread_ex.o 00:02:59.761 CC app/vhost/vhost.o 00:03:00.019 LINK mem_callbacks 00:03:00.019 LINK reactor 00:03:00.019 LINK vhost_fuzz 00:03:00.019 LINK event_perf 00:03:00.019 LINK reactor_perf 00:03:00.019 LINK lsvmd 00:03:00.019 LINK app_repeat 00:03:00.019 LINK spdk_nvme_identify 00:03:00.019 LINK led 00:03:00.019 LINK spdk_nvme_perf 00:03:00.019 LINK scheduler 00:03:00.019 LINK hello_sock 00:03:00.019 LINK vhost 00:03:00.019 LINK thread 00:03:00.277 LINK spdk_top 00:03:00.277 LINK idxd_perf 00:03:00.277 CC test/nvme/reserve/reserve.o 00:03:00.277 CC test/nvme/reset/reset.o 00:03:00.277 CC test/nvme/sgl/sgl.o 00:03:00.277 CC test/nvme/overhead/overhead.o 00:03:00.277 CC test/nvme/aer/aer.o 00:03:00.277 CC test/nvme/startup/startup.o 00:03:00.277 CC test/nvme/cuse/cuse.o 00:03:00.277 CC test/nvme/err_injection/err_injection.o 00:03:00.277 CC test/nvme/boot_partition/boot_partition.o 00:03:00.277 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.277 CC test/nvme/e2edp/nvme_dp.o 00:03:00.277 CC test/nvme/connect_stress/connect_stress.o 00:03:00.277 CC test/nvme/fdp/fdp.o 00:03:00.277 CC test/nvme/simple_copy/simple_copy.o 00:03:00.277 LINK memory_ut 00:03:00.277 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.277 CC test/nvme/compliance/nvme_compliance.o 00:03:00.535 CC test/blobfs/mkfs/mkfs.o 00:03:00.535 CC test/accel/dif/dif.o 00:03:00.535 CC test/lvol/esnap/esnap.o 00:03:00.535 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:00.535 CC examples/nvme/hotplug/hotplug.o 00:03:00.535 CC examples/nvme/hello_world/hello_world.o 00:03:00.535 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.535 CC examples/nvme/reconnect/reconnect.o 00:03:00.535 CC examples/nvme/arbitration/arbitration.o 00:03:00.535 CC examples/nvme/abort/abort.o 00:03:00.535 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:00.535 LINK boot_partition 00:03:00.535 LINK startup 00:03:00.535 LINK connect_stress 00:03:00.535 LINK doorbell_aers 00:03:00.535 LINK err_injection 00:03:00.535 LINK reserve 00:03:00.535 LINK fused_ordering 00:03:00.535 LINK reset 00:03:00.535 LINK simple_copy 00:03:00.535 LINK mkfs 00:03:00.794 CC examples/accel/perf/accel_perf.o 00:03:00.794 CC examples/blob/cli/blobcli.o 00:03:00.794 CC examples/blob/hello_world/hello_blob.o 00:03:00.794 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:00.794 LINK nvme_dp 00:03:00.794 LINK aer 00:03:00.794 LINK sgl 00:03:00.794 LINK overhead 00:03:00.794 LINK pmr_persistence 00:03:00.794 LINK fdp 00:03:00.794 LINK cmb_copy 00:03:00.794 LINK nvme_compliance 00:03:00.794 LINK hello_world 00:03:00.794 LINK hotplug 00:03:01.053 LINK arbitration 00:03:01.053 LINK reconnect 00:03:01.053 LINK hello_blob 00:03:01.053 LINK abort 00:03:01.053 LINK hello_fsdev 00:03:01.053 LINK nvme_manage 00:03:01.053 LINK dif 00:03:01.311 LINK blobcli 00:03:01.311 LINK accel_perf 00:03:01.311 LINK iscsi_fuzz 00:03:01.570 LINK cuse 00:03:01.829 CC test/bdev/bdevio/bdevio.o 00:03:01.829 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.829 CC examples/bdev/hello_world/hello_bdev.o 00:03:02.088 LINK hello_bdev 00:03:02.088 LINK bdevio 00:03:02.656 LINK bdevperf 00:03:02.915 CC examples/nvmf/nvmf/nvmf.o 00:03:03.174 LINK nvmf 00:03:05.707 LINK esnap 00:03:05.707 00:03:05.707 real 1m0.720s 00:03:05.707 user 8m55.916s 00:03:05.707 sys 3m30.635s 00:03:05.707 23:43:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:05.707 23:43:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:05.707 ************************************ 00:03:05.708 END TEST make 00:03:05.708 
************************************ 00:03:05.708 23:43:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:05.708 23:43:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:05.708 23:43:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:05.708 23:43:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:05.708 23:43:44 -- pm/common@44 -- $ pid=3707976 00:03:05.708 23:43:44 -- pm/common@50 -- $ kill -TERM 3707976 00:03:05.708 23:43:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:05.708 23:43:44 -- pm/common@44 -- $ pid=3707978 00:03:05.708 23:43:44 -- pm/common@50 -- $ kill -TERM 3707978 00:03:05.708 23:43:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:05.708 23:43:44 -- pm/common@44 -- $ pid=3707980 00:03:05.708 23:43:44 -- pm/common@50 -- $ kill -TERM 3707980 00:03:05.708 23:43:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:05.708 23:43:44 -- pm/common@44 -- $ pid=3708004 00:03:05.708 23:43:44 -- pm/common@50 -- $ sudo -E kill -TERM 3708004 00:03:05.708 23:43:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:05.708 23:43:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:05.708 23:43:44 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 
00:03:05.708 23:43:44 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:05.708 23:43:44 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:05.708 23:43:44 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:05.708 23:43:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:05.708 23:43:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:05.708 23:43:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:05.708 23:43:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:05.708 23:43:44 -- scripts/common.sh@336 -- # read -ra ver1 00:03:05.708 23:43:44 -- scripts/common.sh@337 -- # IFS=.-: 00:03:05.708 23:43:44 -- scripts/common.sh@337 -- # read -ra ver2 00:03:05.708 23:43:44 -- scripts/common.sh@338 -- # local 'op=<' 00:03:05.708 23:43:44 -- scripts/common.sh@340 -- # ver1_l=2 00:03:05.708 23:43:44 -- scripts/common.sh@341 -- # ver2_l=1 00:03:05.708 23:43:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:05.708 23:43:44 -- scripts/common.sh@344 -- # case "$op" in 00:03:05.708 23:43:44 -- scripts/common.sh@345 -- # : 1 00:03:05.708 23:43:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:05.708 23:43:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:05.708 23:43:44 -- scripts/common.sh@365 -- # decimal 1 00:03:05.708 23:43:44 -- scripts/common.sh@353 -- # local d=1 00:03:05.708 23:43:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:05.708 23:43:44 -- scripts/common.sh@355 -- # echo 1 00:03:05.708 23:43:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:05.708 23:43:44 -- scripts/common.sh@366 -- # decimal 2 00:03:05.708 23:43:44 -- scripts/common.sh@353 -- # local d=2 00:03:05.708 23:43:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:05.708 23:43:44 -- scripts/common.sh@355 -- # echo 2 00:03:05.708 23:43:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:05.708 23:43:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:05.708 23:43:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:05.708 23:43:44 -- scripts/common.sh@368 -- # return 0 00:03:05.708 23:43:44 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:05.708 23:43:44 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:05.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.708 --rc genhtml_branch_coverage=1 00:03:05.708 --rc genhtml_function_coverage=1 00:03:05.708 --rc genhtml_legend=1 00:03:05.708 --rc geninfo_all_blocks=1 00:03:05.708 --rc geninfo_unexecuted_blocks=1 00:03:05.708 00:03:05.708 ' 00:03:05.708 23:43:44 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:05.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.708 --rc genhtml_branch_coverage=1 00:03:05.708 --rc genhtml_function_coverage=1 00:03:05.708 --rc genhtml_legend=1 00:03:05.708 --rc geninfo_all_blocks=1 00:03:05.708 --rc geninfo_unexecuted_blocks=1 00:03:05.708 00:03:05.708 ' 00:03:05.708 23:43:44 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:05.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.708 --rc genhtml_branch_coverage=1 00:03:05.708 --rc 
genhtml_function_coverage=1 00:03:05.708 --rc genhtml_legend=1 00:03:05.708 --rc geninfo_all_blocks=1 00:03:05.708 --rc geninfo_unexecuted_blocks=1 00:03:05.708 00:03:05.708 ' 00:03:05.708 23:43:44 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:05.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:05.708 --rc genhtml_branch_coverage=1 00:03:05.708 --rc genhtml_function_coverage=1 00:03:05.708 --rc genhtml_legend=1 00:03:05.708 --rc geninfo_all_blocks=1 00:03:05.708 --rc geninfo_unexecuted_blocks=1 00:03:05.708 00:03:05.708 ' 00:03:05.708 23:43:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:05.708 23:43:44 -- nvmf/common.sh@7 -- # uname -s 00:03:05.708 23:43:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:05.708 23:43:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:05.708 23:43:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:05.708 23:43:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:05.708 23:43:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:05.708 23:43:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:05.708 23:43:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:05.708 23:43:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:05.708 23:43:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:05.708 23:43:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:05.708 23:43:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:05.708 23:43:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:05.708 23:43:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:05.708 23:43:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:05.708 23:43:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:05.708 23:43:44 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:05.708 23:43:44 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:05.708 23:43:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:05.708 23:43:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:05.708 23:43:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:05.708 23:43:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:05.708 23:43:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.708 23:43:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.708 23:43:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.708 23:43:44 -- paths/export.sh@5 -- # export PATH 00:03:05.708 23:43:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:05.708 23:43:44 -- nvmf/common.sh@51 -- # : 0 00:03:05.708 23:43:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:05.708 23:43:44 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:05.708 23:43:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:05.708 23:43:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:05.708 23:43:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:05.708 23:43:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:05.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:05.708 23:43:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:05.708 23:43:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:05.708 23:43:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:05.708 23:43:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:05.708 23:43:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:05.708 23:43:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:05.708 23:43:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:05.708 23:43:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.708 23:43:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:05.708 23:43:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:05.708 23:43:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:05.708 23:43:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:05.708 23:43:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:05.708 23:43:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:05.708 23:43:44 -- spdk/autotest.sh@48 -- # udevadm_pid=3772293 00:03:05.708 23:43:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:05.708 23:43:44 -- pm/common@17 -- # local monitor 00:03:05.708 23:43:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@21 -- # date +%s 00:03:05.708 23:43:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:05.708 23:43:44 -- pm/common@21 -- # date +%s 00:03:05.708 23:43:44 -- pm/common@21 -- # date +%s 00:03:05.708 23:43:44 -- pm/common@25 -- # sleep 1 00:03:05.708 23:43:44 -- pm/common@21 -- # date +%s 00:03:05.968 23:43:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734129824 00:03:05.968 23:43:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734129824 00:03:05.968 23:43:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734129824 00:03:05.968 23:43:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734129824 00:03:05.968 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734129824_collect-cpu-temp.pm.log 00:03:05.968 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734129824_collect-vmstat.pm.log 00:03:05.968 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734129824_collect-cpu-load.pm.log 00:03:05.968 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734129824_collect-bmc-pm.bmc.pm.log 00:03:06.903 
23:43:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:06.903 23:43:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:06.903 23:43:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:06.903 23:43:45 -- common/autotest_common.sh@10 -- # set +x 00:03:06.903 23:43:45 -- spdk/autotest.sh@59 -- # create_test_list 00:03:06.903 23:43:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:06.903 23:43:45 -- common/autotest_common.sh@10 -- # set +x 00:03:06.903 23:43:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:06.903 23:43:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.903 23:43:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.903 23:43:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:06.903 23:43:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:06.903 23:43:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:06.903 23:43:45 -- common/autotest_common.sh@1457 -- # uname 00:03:06.903 23:43:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:06.903 23:43:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:06.903 23:43:45 -- common/autotest_common.sh@1477 -- # uname 00:03:06.903 23:43:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:06.903 23:43:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:06.903 23:43:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:06.903 lcov: LCOV version 1.15 00:03:06.903 23:43:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:25.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.121 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:31.687 23:44:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:31.687 23:44:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:31.687 23:44:09 -- common/autotest_common.sh@10 -- # set +x 00:03:31.687 23:44:09 -- spdk/autotest.sh@78 -- # rm -f 00:03:31.687 23:44:09 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.593 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:33.593 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:33.593 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:33.593 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:33.593 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:33.593 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:33.852 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:33.852 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:34.111 23:44:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:34.111 23:44:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:34.111 23:44:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:34.111 23:44:13 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:34.111 23:44:13 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:34.111 23:44:13 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:34.111 23:44:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:34.111 23:44:13 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:34.111 23:44:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:34.111 23:44:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:34.111 23:44:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:34.111 23:44:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.111 23:44:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:34.111 23:44:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:34.111 23:44:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:34.111 23:44:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:34.111 23:44:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:34.111 23:44:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:34.111 23:44:13 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:34.111 No valid GPT data, bailing 00:03:34.111 23:44:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.111 23:44:13 -- scripts/common.sh@394 -- # pt= 00:03:34.111 23:44:13 -- scripts/common.sh@395 -- 
# return 1 00:03:34.111 23:44:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:34.111 1+0 records in 00:03:34.111 1+0 records out 00:03:34.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00216006 s, 485 MB/s 00:03:34.111 23:44:13 -- spdk/autotest.sh@105 -- # sync 00:03:34.111 23:44:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:34.111 23:44:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:34.111 23:44:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.387 23:44:17 -- spdk/autotest.sh@111 -- # uname -s 00:03:39.387 23:44:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:39.387 23:44:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:39.387 23:44:17 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:41.292 Hugepages 00:03:41.292 node hugesize free / total 00:03:41.292 node0 1048576kB 0 / 0 00:03:41.292 node0 2048kB 0 / 0 00:03:41.292 node1 1048576kB 0 / 0 00:03:41.292 node1 2048kB 0 / 0 00:03:41.292 00:03:41.292 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.292 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:41.292 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:41.292 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:41.292 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:41.292 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:41.292 23:44:20 -- spdk/autotest.sh@117 -- # uname -s 00:03:41.292 23:44:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:41.292 23:44:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:41.292 23:44:20 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.825 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:43.825 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:44.084 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:44.084 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:44.651 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.910 23:44:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:45.847 23:44:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:45.847 23:44:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:45.847 23:44:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.847 23:44:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:45.847 23:44:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:45.847 23:44:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:45.847 23:44:24 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.847 23:44:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:45.847 23:44:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:45.847 23:44:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:45.847 23:44:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:45.847 23:44:24 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.381 Waiting for block devices as requested 00:03:48.381 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:48.641 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:48.641 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:48.900 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:48.900 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:48.900 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:48.900 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:49.159 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:49.159 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:49.159 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:49.159 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:49.417 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:49.417 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:49.417 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:49.675 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:49.675 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:49.675 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:49.934 23:44:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:49.934 23:44:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:49.934 23:44:28 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:49.934 23:44:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:49.934 23:44:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:49.934 23:44:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:49.934 23:44:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:49.934 23:44:28 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:49.934 23:44:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:49.934 23:44:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:49.934 23:44:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:49.934 23:44:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:49.934 23:44:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:49.934 23:44:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:49.934 23:44:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:49.934 23:44:28 -- common/autotest_common.sh@1543 -- # continue 00:03:49.934 23:44:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:49.934 23:44:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.934 23:44:28 -- common/autotest_common.sh@10 -- # set +x 00:03:49.934 23:44:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:49.934 23:44:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.934 
23:44:28 -- common/autotest_common.sh@10 -- # set +x 00:03:49.934 23:44:28 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.468 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.468 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.727 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.663 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.663 23:44:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:53.663 23:44:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:53.663 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.663 23:44:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:53.663 23:44:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:53.663 23:44:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.663 23:44:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:53.663 23:44:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:53.663 23:44:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:53.663 23:44:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:53.663 23:44:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:53.663 23:44:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.663 23:44:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.663 23:44:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.663 23:44:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.663 23:44:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.663 23:44:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:53.663 23:44:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:53.663 23:44:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:53.663 23:44:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:53.663 23:44:32 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:53.663 23:44:32 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:53.663 23:44:32 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:53.663 23:44:32 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:53.663 23:44:32 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:53.663 23:44:32 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:53.663 23:44:32 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3786192 00:03:53.663 23:44:32 -- common/autotest_common.sh@1585 -- # waitforlisten 3786192 00:03:53.663 23:44:32 -- common/autotest_common.sh@835 -- # '[' -z 3786192 ']' 00:03:53.663 23:44:32 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:53.663 23:44:32 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:53.663 23:44:32 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.663 23:44:32 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.663 23:44:32 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:53.663 23:44:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.922 [2024-12-13 23:44:32.810506] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:03:53.922 [2024-12-13 23:44:32.810614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786192 ] 00:03:53.922 [2024-12-13 23:44:32.922016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.922 [2024-12-13 23:44:33.026559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.859 23:44:33 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.859 23:44:33 -- common/autotest_common.sh@868 -- # return 0 00:03:54.859 23:44:33 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:54.859 23:44:33 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:54.859 23:44:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:58.146 nvme0n1 00:03:58.146 23:44:36 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:58.146 [2024-12-13 23:44:37.062583] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:58.146 [2024-12-13 23:44:37.062627] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:58.146 request: 00:03:58.146 { 00:03:58.146 "nvme_ctrlr_name": "nvme0", 00:03:58.146 "password": "test", 00:03:58.146 "method": 
"bdev_nvme_opal_revert", 00:03:58.146 "req_id": 1 00:03:58.146 } 00:03:58.146 Got JSON-RPC error response 00:03:58.146 response: 00:03:58.146 { 00:03:58.146 "code": -32603, 00:03:58.146 "message": "Internal error" 00:03:58.146 } 00:03:58.146 23:44:37 -- common/autotest_common.sh@1591 -- # true 00:03:58.146 23:44:37 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:58.146 23:44:37 -- common/autotest_common.sh@1595 -- # killprocess 3786192 00:03:58.146 23:44:37 -- common/autotest_common.sh@954 -- # '[' -z 3786192 ']' 00:03:58.146 23:44:37 -- common/autotest_common.sh@958 -- # kill -0 3786192 00:03:58.146 23:44:37 -- common/autotest_common.sh@959 -- # uname 00:03:58.146 23:44:37 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.146 23:44:37 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3786192 00:03:58.146 23:44:37 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.146 23:44:37 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.146 23:44:37 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3786192' 00:03:58.146 killing process with pid 3786192 00:03:58.146 23:44:37 -- common/autotest_common.sh@973 -- # kill 3786192 00:03:58.146 23:44:37 -- common/autotest_common.sh@978 -- # wait 3786192 00:04:02.338 23:44:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.338 23:44:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.338 23:44:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.338 23:44:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.338 23:44:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.338 23:44:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.338 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.338 23:44:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.338 23:44:40 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:02.338 23:44:40 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.338 23:44:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.338 23:44:40 -- common/autotest_common.sh@10 -- # set +x 00:04:02.338 ************************************ 00:04:02.338 START TEST env 00:04:02.338 ************************************ 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:02.338 * Looking for test storage... 00:04:02.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.338 23:44:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.338 23:44:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.338 23:44:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.338 23:44:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.338 23:44:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.338 23:44:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.338 23:44:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.338 23:44:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.338 23:44:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.338 23:44:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.338 23:44:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.338 23:44:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:02.338 23:44:40 env -- scripts/common.sh@345 -- # : 1 00:04:02.338 23:44:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.338 23:44:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.338 23:44:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.338 23:44:40 env -- scripts/common.sh@353 -- # local d=1 00:04:02.338 23:44:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.338 23:44:40 env -- scripts/common.sh@355 -- # echo 1 00:04:02.338 23:44:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.338 23:44:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.338 23:44:40 env -- scripts/common.sh@353 -- # local d=2 00:04:02.338 23:44:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.338 23:44:40 env -- scripts/common.sh@355 -- # echo 2 00:04:02.338 23:44:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.338 23:44:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.338 23:44:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.338 23:44:40 env -- scripts/common.sh@368 -- # return 0 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.338 --rc genhtml_branch_coverage=1 00:04:02.338 --rc genhtml_function_coverage=1 00:04:02.338 --rc genhtml_legend=1 00:04:02.338 --rc geninfo_all_blocks=1 00:04:02.338 --rc geninfo_unexecuted_blocks=1 00:04:02.338 00:04:02.338 ' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.338 --rc genhtml_branch_coverage=1 00:04:02.338 --rc genhtml_function_coverage=1 00:04:02.338 --rc genhtml_legend=1 00:04:02.338 --rc geninfo_all_blocks=1 00:04:02.338 --rc geninfo_unexecuted_blocks=1 00:04:02.338 00:04:02.338 ' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:02.338 --rc genhtml_branch_coverage=1 00:04:02.338 --rc genhtml_function_coverage=1 00:04:02.338 --rc genhtml_legend=1 00:04:02.338 --rc geninfo_all_blocks=1 00:04:02.338 --rc geninfo_unexecuted_blocks=1 00:04:02.338 00:04:02.338 ' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.338 --rc genhtml_branch_coverage=1 00:04:02.338 --rc genhtml_function_coverage=1 00:04:02.338 --rc genhtml_legend=1 00:04:02.338 --rc geninfo_all_blocks=1 00:04:02.338 --rc geninfo_unexecuted_blocks=1 00:04:02.338 00:04:02.338 ' 00:04:02.338 23:44:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.338 23:44:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.338 23:44:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.338 ************************************ 00:04:02.338 START TEST env_memory 00:04:02.338 ************************************ 00:04:02.338 23:44:40 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:02.338 00:04:02.338 00:04:02.338 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.338 http://cunit.sourceforge.net/ 00:04:02.338 00:04:02.338 00:04:02.338 Suite: memory 00:04:02.338 Test: alloc and free memory map ...[2024-12-13 23:44:40.990368] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.338 passed 00:04:02.338 Test: mem map translation ...[2024-12-13 23:44:41.030563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.338 [2024-12-13 
23:44:41.030586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.338 [2024-12-13 23:44:41.030632] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.338 [2024-12-13 23:44:41.030645] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.338 passed 00:04:02.338 Test: mem map registration ...[2024-12-13 23:44:41.092499] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.338 [2024-12-13 23:44:41.092528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.338 passed 00:04:02.338 Test: mem map adjacent registrations ...passed 00:04:02.338 00:04:02.338 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.338 suites 1 1 n/a 0 0 00:04:02.338 tests 4 4 4 0 0 00:04:02.338 asserts 152 152 152 0 n/a 00:04:02.338 00:04:02.338 Elapsed time = 0.228 seconds 00:04:02.338 00:04:02.338 real 0m0.261s 00:04:02.338 user 0m0.244s 00:04:02.338 sys 0m0.016s 00:04:02.338 23:44:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.338 23:44:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.338 ************************************ 00:04:02.338 END TEST env_memory 00:04:02.338 ************************************ 00:04:02.338 23:44:41 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.338 23:44:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:02.338 23:44:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.338 23:44:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.338 ************************************ 00:04:02.338 START TEST env_vtophys 00:04:02.338 ************************************ 00:04:02.338 23:44:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:02.338 EAL: lib.eal log level changed from notice to debug 00:04:02.338 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.338 EAL: Detected lcore 1 as core 1 on socket 0 00:04:02.338 EAL: Detected lcore 2 as core 2 on socket 0 00:04:02.338 EAL: Detected lcore 3 as core 3 on socket 0 00:04:02.338 EAL: Detected lcore 4 as core 4 on socket 0 00:04:02.338 EAL: Detected lcore 5 as core 5 on socket 0 00:04:02.338 EAL: Detected lcore 6 as core 6 on socket 0 00:04:02.338 EAL: Detected lcore 7 as core 8 on socket 0 00:04:02.338 EAL: Detected lcore 8 as core 9 on socket 0 00:04:02.338 EAL: Detected lcore 9 as core 10 on socket 0 00:04:02.338 EAL: Detected lcore 10 as core 11 on socket 0 00:04:02.338 EAL: Detected lcore 11 as core 12 on socket 0 00:04:02.338 EAL: Detected lcore 12 as core 13 on socket 0 00:04:02.338 EAL: Detected lcore 13 as core 16 on socket 0 00:04:02.338 EAL: Detected lcore 14 as core 17 on socket 0 00:04:02.338 EAL: Detected lcore 15 as core 18 on socket 0 00:04:02.338 EAL: Detected lcore 16 as core 19 on socket 0 00:04:02.338 EAL: Detected lcore 17 as core 20 on socket 0 00:04:02.338 EAL: Detected lcore 18 as core 21 on socket 0 00:04:02.338 EAL: Detected lcore 19 as core 25 on socket 0 00:04:02.338 EAL: Detected lcore 20 as core 26 on socket 0 00:04:02.338 EAL: Detected lcore 21 as core 27 on socket 0 00:04:02.338 EAL: Detected lcore 22 as core 28 on socket 0 00:04:02.338 EAL: Detected lcore 23 as core 29 on socket 0 00:04:02.338 EAL: Detected lcore 24 as core 0 on socket 1 00:04:02.338 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:02.338 EAL: Detected lcore 26 as core 2 on socket 1 00:04:02.338 EAL: Detected lcore 27 as core 3 on socket 1 00:04:02.338 EAL: Detected lcore 28 as core 4 on socket 1 00:04:02.338 EAL: Detected lcore 29 as core 5 on socket 1 00:04:02.338 EAL: Detected lcore 30 as core 6 on socket 1 00:04:02.338 EAL: Detected lcore 31 as core 8 on socket 1 00:04:02.338 EAL: Detected lcore 32 as core 9 on socket 1 00:04:02.338 EAL: Detected lcore 33 as core 10 on socket 1 00:04:02.339 EAL: Detected lcore 34 as core 11 on socket 1 00:04:02.339 EAL: Detected lcore 35 as core 12 on socket 1 00:04:02.339 EAL: Detected lcore 36 as core 13 on socket 1 00:04:02.339 EAL: Detected lcore 37 as core 16 on socket 1 00:04:02.339 EAL: Detected lcore 38 as core 17 on socket 1 00:04:02.339 EAL: Detected lcore 39 as core 18 on socket 1 00:04:02.339 EAL: Detected lcore 40 as core 19 on socket 1 00:04:02.339 EAL: Detected lcore 41 as core 20 on socket 1 00:04:02.339 EAL: Detected lcore 42 as core 21 on socket 1 00:04:02.339 EAL: Detected lcore 43 as core 25 on socket 1 00:04:02.339 EAL: Detected lcore 44 as core 26 on socket 1 00:04:02.339 EAL: Detected lcore 45 as core 27 on socket 1 00:04:02.339 EAL: Detected lcore 46 as core 28 on socket 1 00:04:02.339 EAL: Detected lcore 47 as core 29 on socket 1 00:04:02.339 EAL: Detected lcore 48 as core 0 on socket 0 00:04:02.339 EAL: Detected lcore 49 as core 1 on socket 0 00:04:02.339 EAL: Detected lcore 50 as core 2 on socket 0 00:04:02.339 EAL: Detected lcore 51 as core 3 on socket 0 00:04:02.339 EAL: Detected lcore 52 as core 4 on socket 0 00:04:02.339 EAL: Detected lcore 53 as core 5 on socket 0 00:04:02.339 EAL: Detected lcore 54 as core 6 on socket 0 00:04:02.339 EAL: Detected lcore 55 as core 8 on socket 0 00:04:02.339 EAL: Detected lcore 56 as core 9 on socket 0 00:04:02.339 EAL: Detected lcore 57 as core 10 on socket 0 00:04:02.339 EAL: Detected lcore 58 as core 11 on socket 0 00:04:02.339 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:02.339 EAL: Detected lcore 60 as core 13 on socket 0 00:04:02.339 EAL: Detected lcore 61 as core 16 on socket 0 00:04:02.339 EAL: Detected lcore 62 as core 17 on socket 0 00:04:02.339 EAL: Detected lcore 63 as core 18 on socket 0 00:04:02.339 EAL: Detected lcore 64 as core 19 on socket 0 00:04:02.339 EAL: Detected lcore 65 as core 20 on socket 0 00:04:02.339 EAL: Detected lcore 66 as core 21 on socket 0 00:04:02.339 EAL: Detected lcore 67 as core 25 on socket 0 00:04:02.339 EAL: Detected lcore 68 as core 26 on socket 0 00:04:02.339 EAL: Detected lcore 69 as core 27 on socket 0 00:04:02.339 EAL: Detected lcore 70 as core 28 on socket 0 00:04:02.339 EAL: Detected lcore 71 as core 29 on socket 0 00:04:02.339 EAL: Detected lcore 72 as core 0 on socket 1 00:04:02.339 EAL: Detected lcore 73 as core 1 on socket 1 00:04:02.339 EAL: Detected lcore 74 as core 2 on socket 1 00:04:02.339 EAL: Detected lcore 75 as core 3 on socket 1 00:04:02.339 EAL: Detected lcore 76 as core 4 on socket 1 00:04:02.339 EAL: Detected lcore 77 as core 5 on socket 1 00:04:02.339 EAL: Detected lcore 78 as core 6 on socket 1 00:04:02.339 EAL: Detected lcore 79 as core 8 on socket 1 00:04:02.339 EAL: Detected lcore 80 as core 9 on socket 1 00:04:02.339 EAL: Detected lcore 81 as core 10 on socket 1 00:04:02.339 EAL: Detected lcore 82 as core 11 on socket 1 00:04:02.339 EAL: Detected lcore 83 as core 12 on socket 1 00:04:02.339 EAL: Detected lcore 84 as core 13 on socket 1 00:04:02.339 EAL: Detected lcore 85 as core 16 on socket 1 00:04:02.339 EAL: Detected lcore 86 as core 17 on socket 1 00:04:02.339 EAL: Detected lcore 87 as core 18 on socket 1 00:04:02.339 EAL: Detected lcore 88 as core 19 on socket 1 00:04:02.339 EAL: Detected lcore 89 as core 20 on socket 1 00:04:02.339 EAL: Detected lcore 90 as core 21 on socket 1 00:04:02.339 EAL: Detected lcore 91 as core 25 on socket 1 00:04:02.339 EAL: Detected lcore 92 as core 26 on socket 1 00:04:02.339 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:02.339 EAL: Detected lcore 94 as core 28 on socket 1 00:04:02.339 EAL: Detected lcore 95 as core 29 on socket 1 00:04:02.339 EAL: Maximum logical cores by configuration: 128 00:04:02.339 EAL: Detected CPU lcores: 96 00:04:02.339 EAL: Detected NUMA nodes: 2 00:04:02.339 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.339 EAL: Detected shared linkage of DPDK 00:04:02.339 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.339 EAL: Bus pci wants IOVA as 'DC' 00:04:02.339 EAL: Buses did not request a specific IOVA mode. 00:04:02.339 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:02.339 EAL: Selected IOVA mode 'VA' 00:04:02.339 EAL: Probing VFIO support... 00:04:02.339 EAL: IOMMU type 1 (Type 1) is supported 00:04:02.339 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:02.339 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:02.339 EAL: VFIO support initialized 00:04:02.339 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.339 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.339 EAL: Setting up physically contiguous memory... 
00:04:02.339 EAL: Setting maximum number of open files to 524288 00:04:02.339 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.339 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:02.339 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.339 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:02.339 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.339 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:02.339 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:02.339 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.339 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:02.339 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:02.339 EAL: Hugepages will be freed exactly as allocated. 
00:04:02.339 EAL: No shared files mode enabled, IPC is disabled 00:04:02.339 EAL: No shared files mode enabled, IPC is disabled 00:04:02.339 EAL: TSC frequency is ~2100000 KHz 00:04:02.339 EAL: Main lcore 0 is ready (tid=7f78db67ea40;cpuset=[0]) 00:04:02.339 EAL: Trying to obtain current memory policy. 00:04:02.339 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.339 EAL: Restoring previous memory policy: 0 00:04:02.339 EAL: request: mp_malloc_sync 00:04:02.339 EAL: No shared files mode enabled, IPC is disabled 00:04:02.339 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.339 EAL: No shared files mode enabled, IPC is disabled 00:04:02.339 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.339 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.339 00:04:02.339 00:04:02.339 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.339 http://cunit.sourceforge.net/ 00:04:02.339 00:04:02.339 00:04:02.339 Suite: components_suite 00:04:02.599 Test: vtophys_malloc_test ...passed 00:04:02.599 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:02.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.599 EAL: Restoring previous memory policy: 4 00:04:02.599 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.599 EAL: request: mp_malloc_sync 00:04:02.599 EAL: No shared files mode enabled, IPC is disabled 00:04:02.599 EAL: Heap on socket 0 was expanded by 4MB 00:04:02.599 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.599 EAL: request: mp_malloc_sync 00:04:02.599 EAL: No shared files mode enabled, IPC is disabled 00:04:02.599 EAL: Heap on socket 0 was shrunk by 4MB 00:04:02.599 EAL: Trying to obtain current memory policy. 
00:04:02.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.599 EAL: Restoring previous memory policy: 4 00:04:02.599 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.599 EAL: request: mp_malloc_sync 00:04:02.599 EAL: No shared files mode enabled, IPC is disabled 00:04:02.599 EAL: Heap on socket 0 was expanded by 6MB 00:04:02.599 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.599 EAL: request: mp_malloc_sync 00:04:02.599 EAL: No shared files mode enabled, IPC is disabled 00:04:02.599 EAL: Heap on socket 0 was shrunk by 6MB 00:04:02.599 EAL: Trying to obtain current memory policy. 00:04:02.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.599 EAL: Restoring previous memory policy: 4 00:04:02.599 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.599 EAL: request: mp_malloc_sync 00:04:02.599 EAL: No shared files mode enabled, IPC is disabled 00:04:02.599 EAL: Heap on socket 0 was expanded by 10MB 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was shrunk by 10MB 00:04:02.857 EAL: Trying to obtain current memory policy. 00:04:02.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.857 EAL: Restoring previous memory policy: 4 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was expanded by 18MB 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was shrunk by 18MB 00:04:02.857 EAL: Trying to obtain current memory policy. 
00:04:02.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.857 EAL: Restoring previous memory policy: 4 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was expanded by 34MB 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.857 EAL: Trying to obtain current memory policy. 00:04:02.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.857 EAL: Restoring previous memory policy: 4 00:04:02.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.857 EAL: request: mp_malloc_sync 00:04:02.857 EAL: No shared files mode enabled, IPC is disabled 00:04:02.857 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.116 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.116 EAL: request: mp_malloc_sync 00:04:03.116 EAL: No shared files mode enabled, IPC is disabled 00:04:03.116 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.116 EAL: Trying to obtain current memory policy. 00:04:03.116 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.116 EAL: Restoring previous memory policy: 4 00:04:03.116 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.116 EAL: request: mp_malloc_sync 00:04:03.116 EAL: No shared files mode enabled, IPC is disabled 00:04:03.116 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.375 EAL: request: mp_malloc_sync 00:04:03.375 EAL: No shared files mode enabled, IPC is disabled 00:04:03.375 EAL: Heap on socket 0 was shrunk by 130MB 00:04:03.633 EAL: Trying to obtain current memory policy. 
00:04:03.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.633 EAL: Restoring previous memory policy: 4 00:04:03.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.633 EAL: request: mp_malloc_sync 00:04:03.633 EAL: No shared files mode enabled, IPC is disabled 00:04:03.633 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.201 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.201 EAL: request: mp_malloc_sync 00:04:04.201 EAL: No shared files mode enabled, IPC is disabled 00:04:04.201 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.460 EAL: Trying to obtain current memory policy. 00:04:04.460 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.719 EAL: Restoring previous memory policy: 4 00:04:04.719 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.719 EAL: request: mp_malloc_sync 00:04:04.719 EAL: No shared files mode enabled, IPC is disabled 00:04:04.719 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.570 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.570 EAL: request: mp_malloc_sync 00:04:05.570 EAL: No shared files mode enabled, IPC is disabled 00:04:05.570 EAL: Heap on socket 0 was shrunk by 514MB 00:04:06.505 EAL: Trying to obtain current memory policy. 
00:04:06.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.505 EAL: Restoring previous memory policy: 4 00:04:06.505 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.505 EAL: request: mp_malloc_sync 00:04:06.505 EAL: No shared files mode enabled, IPC is disabled 00:04:06.505 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.408 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.408 EAL: request: mp_malloc_sync 00:04:08.408 EAL: No shared files mode enabled, IPC is disabled 00:04:08.408 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:10.311 passed 00:04:10.311 00:04:10.311 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.311 suites 1 1 n/a 0 0 00:04:10.311 tests 2 2 2 0 0 00:04:10.311 asserts 497 497 497 0 n/a 00:04:10.311 00:04:10.311 Elapsed time = 7.670 seconds 00:04:10.311 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.311 EAL: request: mp_malloc_sync 00:04:10.311 EAL: No shared files mode enabled, IPC is disabled 00:04:10.311 EAL: Heap on socket 0 was shrunk by 2MB 00:04:10.311 EAL: No shared files mode enabled, IPC is disabled 00:04:10.311 EAL: No shared files mode enabled, IPC is disabled 00:04:10.311 EAL: No shared files mode enabled, IPC is disabled 00:04:10.311 00:04:10.311 real 0m7.903s 00:04:10.311 user 0m7.105s 00:04:10.311 sys 0m0.750s 00:04:10.311 23:44:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.311 23:44:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:10.311 ************************************ 00:04:10.311 END TEST env_vtophys 00:04:10.311 ************************************ 00:04:10.311 23:44:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.311 23:44:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.311 23:44:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.311 23:44:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.311 
************************************ 00:04:10.311 START TEST env_pci 00:04:10.311 ************************************ 00:04:10.311 23:44:49 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.311 00:04:10.311 00:04:10.311 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.311 http://cunit.sourceforge.net/ 00:04:10.311 00:04:10.311 00:04:10.311 Suite: pci 00:04:10.311 Test: pci_hook ...[2024-12-13 23:44:49.232009] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3789046 has claimed it 00:04:10.311 EAL: Cannot find device (10000:00:01.0) 00:04:10.311 EAL: Failed to attach device on primary process 00:04:10.311 passed 00:04:10.311 00:04:10.311 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.311 suites 1 1 n/a 0 0 00:04:10.311 tests 1 1 1 0 0 00:04:10.311 asserts 25 25 25 0 n/a 00:04:10.311 00:04:10.311 Elapsed time = 0.035 seconds 00:04:10.311 00:04:10.311 real 0m0.094s 00:04:10.311 user 0m0.036s 00:04:10.311 sys 0m0.057s 00:04:10.311 23:44:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.311 23:44:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:10.311 ************************************ 00:04:10.311 END TEST env_pci 00:04:10.311 ************************************ 00:04:10.311 23:44:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:10.311 23:44:49 env -- env/env.sh@15 -- # uname 00:04:10.311 23:44:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:10.311 23:44:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:10.311 23:44:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.311 23:44:49 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:10.311 23:44:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.311 23:44:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.311 ************************************ 00:04:10.311 START TEST env_dpdk_post_init 00:04:10.311 ************************************ 00:04:10.311 23:44:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.311 EAL: Detected CPU lcores: 96 00:04:10.311 EAL: Detected NUMA nodes: 2 00:04:10.311 EAL: Detected shared linkage of DPDK 00:04:10.311 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.570 EAL: Selected IOVA mode 'VA' 00:04:10.570 EAL: VFIO support initialized 00:04:10.570 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.570 EAL: Using IOMMU type 1 (Type 1) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:10.570 EAL: Ignore mapping IO port bar(1) 00:04:10.570 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:11.508 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:11.508 EAL: Ignore mapping IO port bar(1) 00:04:11.508 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:14.792 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:14.792 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:14.792 Starting DPDK initialization... 00:04:14.792 Starting SPDK post initialization... 00:04:14.792 SPDK NVMe probe 00:04:14.792 Attaching to 0000:5e:00.0 00:04:14.792 Attached to 0000:5e:00.0 00:04:14.792 Cleaning up... 
00:04:14.792 00:04:14.792 real 0m4.491s 00:04:14.792 user 0m3.067s 00:04:14.792 sys 0m0.495s 00:04:14.792 23:44:53 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.792 23:44:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.792 ************************************ 00:04:14.792 END TEST env_dpdk_post_init 00:04:14.792 ************************************ 00:04:14.792 23:44:53 env -- env/env.sh@26 -- # uname 00:04:14.792 23:44:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.792 23:44:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.792 23:44:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.792 23:44:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.792 23:44:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.792 ************************************ 00:04:14.792 START TEST env_mem_callbacks 00:04:14.792 ************************************ 00:04:14.792 23:44:53 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.051 EAL: Detected CPU lcores: 96 00:04:15.051 EAL: Detected NUMA nodes: 2 00:04:15.051 EAL: Detected shared linkage of DPDK 00:04:15.051 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.051 EAL: Selected IOVA mode 'VA' 00:04:15.051 EAL: VFIO support initialized 00:04:15.051 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.051 00:04:15.051 00:04:15.051 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.051 http://cunit.sourceforge.net/ 00:04:15.051 00:04:15.051 00:04:15.051 Suite: memory 00:04:15.051 Test: test ... 
00:04:15.051 register 0x200000200000 2097152 00:04:15.051 malloc 3145728 00:04:15.051 register 0x200000400000 4194304 00:04:15.051 buf 0x2000004fffc0 len 3145728 PASSED 00:04:15.051 malloc 64 00:04:15.051 buf 0x2000004ffec0 len 64 PASSED 00:04:15.051 malloc 4194304 00:04:15.051 register 0x200000800000 6291456 00:04:15.051 buf 0x2000009fffc0 len 4194304 PASSED 00:04:15.051 free 0x2000004fffc0 3145728 00:04:15.051 free 0x2000004ffec0 64 00:04:15.051 unregister 0x200000400000 4194304 PASSED 00:04:15.051 free 0x2000009fffc0 4194304 00:04:15.051 unregister 0x200000800000 6291456 PASSED 00:04:15.051 malloc 8388608 00:04:15.051 register 0x200000400000 10485760 00:04:15.051 buf 0x2000005fffc0 len 8388608 PASSED 00:04:15.051 free 0x2000005fffc0 8388608 00:04:15.051 unregister 0x200000400000 10485760 PASSED 00:04:15.051 passed 00:04:15.051 00:04:15.051 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.051 suites 1 1 n/a 0 0 00:04:15.051 tests 1 1 1 0 0 00:04:15.051 asserts 15 15 15 0 n/a 00:04:15.051 00:04:15.051 Elapsed time = 0.070 seconds 00:04:15.051 00:04:15.051 real 0m0.177s 00:04:15.051 user 0m0.100s 00:04:15.051 sys 0m0.076s 00:04:15.051 23:44:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.051 23:44:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.051 ************************************ 00:04:15.051 END TEST env_mem_callbacks 00:04:15.051 ************************************ 00:04:15.051 00:04:15.051 real 0m13.415s 00:04:15.051 user 0m10.779s 00:04:15.051 sys 0m1.692s 00:04:15.051 23:44:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.051 23:44:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.051 ************************************ 00:04:15.051 END TEST env 00:04:15.051 ************************************ 00:04:15.051 23:44:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:15.051 23:44:54 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.051 23:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.051 23:44:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.310 ************************************ 00:04:15.310 START TEST rpc 00:04:15.310 ************************************ 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:15.310 * Looking for test storage... 00:04:15.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.310 23:44:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.310 23:44:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.310 23:44:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.310 23:44:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.310 23:44:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.310 23:44:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.310 23:44:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.310 23:44:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.310 23:44:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.310 23:44:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.310 23:44:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.310 23:44:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.310 23:44:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.310 23:44:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.310 23:44:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.310 --rc genhtml_branch_coverage=1 00:04:15.310 --rc genhtml_function_coverage=1 00:04:15.310 --rc genhtml_legend=1 00:04:15.310 --rc geninfo_all_blocks=1 00:04:15.310 --rc geninfo_unexecuted_blocks=1 00:04:15.310 00:04:15.310 ' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.310 --rc genhtml_branch_coverage=1 00:04:15.310 --rc genhtml_function_coverage=1 00:04:15.310 --rc genhtml_legend=1 00:04:15.310 --rc geninfo_all_blocks=1 00:04:15.310 --rc geninfo_unexecuted_blocks=1 00:04:15.310 00:04:15.310 ' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.310 --rc genhtml_branch_coverage=1 00:04:15.310 --rc genhtml_function_coverage=1 00:04:15.310 --rc genhtml_legend=1 00:04:15.310 --rc geninfo_all_blocks=1 00:04:15.310 --rc geninfo_unexecuted_blocks=1 00:04:15.310 00:04:15.310 ' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.310 --rc genhtml_branch_coverage=1 00:04:15.310 --rc genhtml_function_coverage=1 00:04:15.310 --rc genhtml_legend=1 00:04:15.310 --rc geninfo_all_blocks=1 00:04:15.310 --rc geninfo_unexecuted_blocks=1 00:04:15.310 00:04:15.310 ' 00:04:15.310 23:44:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3790072 00:04:15.310 23:44:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.310 23:44:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:15.310 23:44:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3790072 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@835 -- # '[' -z 3790072 ']' 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.310 23:44:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.569 [2024-12-13 23:44:54.460935] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:15.569 [2024-12-13 23:44:54.461025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790072 ] 00:04:15.569 [2024-12-13 23:44:54.572878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.569 [2024-12-13 23:44:54.675775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.569 [2024-12-13 23:44:54.675815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3790072' to capture a snapshot of events at runtime. 00:04:15.569 [2024-12-13 23:44:54.675826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.569 [2024-12-13 23:44:54.675850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.569 [2024-12-13 23:44:54.675859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3790072 for offline analysis/debug. 
00:04:15.569 [2024-12-13 23:44:54.677150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.515 23:44:55 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.515 23:44:55 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.515 23:44:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.515 23:44:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.515 23:44:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.515 23:44:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.515 23:44:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.515 23:44:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.515 23:44:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.515 ************************************ 00:04:16.515 START TEST rpc_integrity 00:04:16.515 ************************************ 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.515 23:44:55 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.515 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.515 { 00:04:16.515 "name": "Malloc0", 00:04:16.515 "aliases": [ 00:04:16.515 "78b88fd0-ebc4-4198-9ab2-c620475fa3cc" 00:04:16.515 ], 00:04:16.515 "product_name": "Malloc disk", 00:04:16.515 "block_size": 512, 00:04:16.515 "num_blocks": 16384, 00:04:16.515 "uuid": "78b88fd0-ebc4-4198-9ab2-c620475fa3cc", 00:04:16.515 "assigned_rate_limits": { 00:04:16.515 "rw_ios_per_sec": 0, 00:04:16.515 "rw_mbytes_per_sec": 0, 00:04:16.515 "r_mbytes_per_sec": 0, 00:04:16.515 "w_mbytes_per_sec": 0 00:04:16.515 }, 00:04:16.515 "claimed": false, 00:04:16.515 "zoned": false, 00:04:16.515 "supported_io_types": { 00:04:16.515 "read": true, 00:04:16.515 "write": true, 00:04:16.515 "unmap": true, 00:04:16.515 "flush": true, 00:04:16.515 "reset": true, 00:04:16.515 "nvme_admin": false, 00:04:16.515 "nvme_io": false, 00:04:16.515 "nvme_io_md": false, 00:04:16.515 "write_zeroes": true, 00:04:16.515 "zcopy": true, 00:04:16.515 "get_zone_info": false, 00:04:16.515 
"zone_management": false, 00:04:16.515 "zone_append": false, 00:04:16.515 "compare": false, 00:04:16.515 "compare_and_write": false, 00:04:16.515 "abort": true, 00:04:16.515 "seek_hole": false, 00:04:16.515 "seek_data": false, 00:04:16.515 "copy": true, 00:04:16.515 "nvme_iov_md": false 00:04:16.515 }, 00:04:16.515 "memory_domains": [ 00:04:16.515 { 00:04:16.515 "dma_device_id": "system", 00:04:16.515 "dma_device_type": 1 00:04:16.515 }, 00:04:16.515 { 00:04:16.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.515 "dma_device_type": 2 00:04:16.515 } 00:04:16.515 ], 00:04:16.515 "driver_specific": {} 00:04:16.515 } 00:04:16.515 ]' 00:04:16.515 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 [2024-12-13 23:44:55.669551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.774 [2024-12-13 23:44:55.669600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.774 [2024-12-13 23:44:55.669623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021c80 00:04:16.774 [2024-12-13 23:44:55.669633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.774 [2024-12-13 23:44:55.671580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.774 [2024-12-13 23:44:55.671604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.774 Passthru0 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.774 { 00:04:16.774 "name": "Malloc0", 00:04:16.774 "aliases": [ 00:04:16.774 "78b88fd0-ebc4-4198-9ab2-c620475fa3cc" 00:04:16.774 ], 00:04:16.774 "product_name": "Malloc disk", 00:04:16.774 "block_size": 512, 00:04:16.774 "num_blocks": 16384, 00:04:16.774 "uuid": "78b88fd0-ebc4-4198-9ab2-c620475fa3cc", 00:04:16.774 "assigned_rate_limits": { 00:04:16.774 "rw_ios_per_sec": 0, 00:04:16.774 "rw_mbytes_per_sec": 0, 00:04:16.774 "r_mbytes_per_sec": 0, 00:04:16.774 "w_mbytes_per_sec": 0 00:04:16.774 }, 00:04:16.774 "claimed": true, 00:04:16.774 "claim_type": "exclusive_write", 00:04:16.774 "zoned": false, 00:04:16.774 "supported_io_types": { 00:04:16.774 "read": true, 00:04:16.774 "write": true, 00:04:16.774 "unmap": true, 00:04:16.774 "flush": true, 00:04:16.774 "reset": true, 00:04:16.774 "nvme_admin": false, 00:04:16.774 "nvme_io": false, 00:04:16.774 "nvme_io_md": false, 00:04:16.774 "write_zeroes": true, 00:04:16.774 "zcopy": true, 00:04:16.774 "get_zone_info": false, 00:04:16.774 "zone_management": false, 00:04:16.774 "zone_append": false, 00:04:16.774 "compare": false, 00:04:16.774 "compare_and_write": false, 00:04:16.774 "abort": true, 00:04:16.774 "seek_hole": false, 00:04:16.774 "seek_data": false, 00:04:16.774 "copy": true, 00:04:16.774 "nvme_iov_md": false 00:04:16.774 }, 00:04:16.774 "memory_domains": [ 00:04:16.774 { 00:04:16.774 "dma_device_id": "system", 00:04:16.774 "dma_device_type": 1 00:04:16.774 }, 00:04:16.774 { 00:04:16.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.774 "dma_device_type": 2 00:04:16.774 } 00:04:16.774 ], 00:04:16.774 "driver_specific": {} 00:04:16.774 }, 00:04:16.774 { 
00:04:16.774 "name": "Passthru0", 00:04:16.774 "aliases": [ 00:04:16.774 "9cb24fe8-b36d-5ce3-9013-a15a1b034d7b" 00:04:16.774 ], 00:04:16.774 "product_name": "passthru", 00:04:16.774 "block_size": 512, 00:04:16.774 "num_blocks": 16384, 00:04:16.774 "uuid": "9cb24fe8-b36d-5ce3-9013-a15a1b034d7b", 00:04:16.774 "assigned_rate_limits": { 00:04:16.774 "rw_ios_per_sec": 0, 00:04:16.774 "rw_mbytes_per_sec": 0, 00:04:16.774 "r_mbytes_per_sec": 0, 00:04:16.774 "w_mbytes_per_sec": 0 00:04:16.774 }, 00:04:16.774 "claimed": false, 00:04:16.774 "zoned": false, 00:04:16.774 "supported_io_types": { 00:04:16.774 "read": true, 00:04:16.774 "write": true, 00:04:16.774 "unmap": true, 00:04:16.774 "flush": true, 00:04:16.774 "reset": true, 00:04:16.774 "nvme_admin": false, 00:04:16.774 "nvme_io": false, 00:04:16.774 "nvme_io_md": false, 00:04:16.774 "write_zeroes": true, 00:04:16.774 "zcopy": true, 00:04:16.774 "get_zone_info": false, 00:04:16.774 "zone_management": false, 00:04:16.774 "zone_append": false, 00:04:16.774 "compare": false, 00:04:16.774 "compare_and_write": false, 00:04:16.774 "abort": true, 00:04:16.774 "seek_hole": false, 00:04:16.774 "seek_data": false, 00:04:16.774 "copy": true, 00:04:16.774 "nvme_iov_md": false 00:04:16.774 }, 00:04:16.774 "memory_domains": [ 00:04:16.774 { 00:04:16.774 "dma_device_id": "system", 00:04:16.774 "dma_device_type": 1 00:04:16.774 }, 00:04:16.774 { 00:04:16.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.774 "dma_device_type": 2 00:04:16.774 } 00:04:16.774 ], 00:04:16.774 "driver_specific": { 00:04:16.774 "passthru": { 00:04:16.774 "name": "Passthru0", 00:04:16.774 "base_bdev_name": "Malloc0" 00:04:16.774 } 00:04:16.774 } 00:04:16.774 } 00:04:16.774 ]' 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.774 23:44:55 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.774 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.774 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.775 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.775 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.775 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.775 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.775 23:44:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.775 00:04:16.775 real 0m0.293s 00:04:16.775 user 0m0.159s 00:04:16.775 sys 0m0.038s 00:04:16.775 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.775 23:44:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.775 ************************************ 00:04:16.775 END TEST rpc_integrity 00:04:16.775 ************************************ 00:04:16.775 23:44:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.775 23:44:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.775 23:44:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.775 23:44:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.775 ************************************ 00:04:16.775 START TEST rpc_plugins 
00:04:16.775 ************************************ 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.775 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.775 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.775 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.775 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:17.033 { 00:04:17.033 "name": "Malloc1", 00:04:17.033 "aliases": [ 00:04:17.033 "822db6cc-127b-43d5-b6be-c1a833e0e003" 00:04:17.033 ], 00:04:17.033 "product_name": "Malloc disk", 00:04:17.033 "block_size": 4096, 00:04:17.033 "num_blocks": 256, 00:04:17.033 "uuid": "822db6cc-127b-43d5-b6be-c1a833e0e003", 00:04:17.033 "assigned_rate_limits": { 00:04:17.033 "rw_ios_per_sec": 0, 00:04:17.033 "rw_mbytes_per_sec": 0, 00:04:17.033 "r_mbytes_per_sec": 0, 00:04:17.033 "w_mbytes_per_sec": 0 00:04:17.033 }, 00:04:17.033 "claimed": false, 00:04:17.033 "zoned": false, 00:04:17.033 "supported_io_types": { 00:04:17.033 "read": true, 00:04:17.033 "write": true, 00:04:17.033 "unmap": true, 00:04:17.033 "flush": true, 00:04:17.033 "reset": true, 00:04:17.033 "nvme_admin": false, 00:04:17.033 "nvme_io": false, 00:04:17.033 "nvme_io_md": false, 00:04:17.033 "write_zeroes": true, 00:04:17.033 "zcopy": true, 00:04:17.033 "get_zone_info": false, 00:04:17.033 "zone_management": false, 00:04:17.033 
"zone_append": false, 00:04:17.033 "compare": false, 00:04:17.033 "compare_and_write": false, 00:04:17.033 "abort": true, 00:04:17.033 "seek_hole": false, 00:04:17.033 "seek_data": false, 00:04:17.033 "copy": true, 00:04:17.033 "nvme_iov_md": false 00:04:17.033 }, 00:04:17.033 "memory_domains": [ 00:04:17.033 { 00:04:17.033 "dma_device_id": "system", 00:04:17.033 "dma_device_type": 1 00:04:17.033 }, 00:04:17.033 { 00:04:17.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.033 "dma_device_type": 2 00:04:17.033 } 00:04:17.033 ], 00:04:17.033 "driver_specific": {} 00:04:17.033 } 00:04:17.033 ]' 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.033 23:44:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:17.033 23:44:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:17.033 23:44:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.033 00:04:17.033 real 0m0.131s 00:04:17.033 user 0m0.078s 00:04:17.033 sys 0m0.013s 00:04:17.033 23:44:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.033 23:44:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.033 ************************************ 
00:04:17.033 END TEST rpc_plugins 00:04:17.033 ************************************ 00:04:17.033 23:44:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.033 23:44:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.033 23:44:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.033 23:44:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.033 ************************************ 00:04:17.033 START TEST rpc_trace_cmd_test 00:04:17.033 ************************************ 00:04:17.033 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:17.033 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.033 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.033 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.034 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3790072", 00:04:17.034 "tpoint_group_mask": "0x8", 00:04:17.034 "iscsi_conn": { 00:04:17.034 "mask": "0x2", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "scsi": { 00:04:17.034 "mask": "0x4", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "bdev": { 00:04:17.034 "mask": "0x8", 00:04:17.034 "tpoint_mask": "0xffffffffffffffff" 00:04:17.034 }, 00:04:17.034 "nvmf_rdma": { 00:04:17.034 "mask": "0x10", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "nvmf_tcp": { 00:04:17.034 "mask": "0x20", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "ftl": { 00:04:17.034 "mask": "0x40", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "blobfs": { 00:04:17.034 "mask": "0x80", 00:04:17.034 
"tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "dsa": { 00:04:17.034 "mask": "0x200", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "thread": { 00:04:17.034 "mask": "0x400", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "nvme_pcie": { 00:04:17.034 "mask": "0x800", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "iaa": { 00:04:17.034 "mask": "0x1000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "nvme_tcp": { 00:04:17.034 "mask": "0x2000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "bdev_nvme": { 00:04:17.034 "mask": "0x4000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "sock": { 00:04:17.034 "mask": "0x8000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "blob": { 00:04:17.034 "mask": "0x10000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "bdev_raid": { 00:04:17.034 "mask": "0x20000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 }, 00:04:17.034 "scheduler": { 00:04:17.034 "mask": "0x40000", 00:04:17.034 "tpoint_mask": "0x0" 00:04:17.034 } 00:04:17.034 }' 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:17.034 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:17.292 00:04:17.292 real 0m0.206s 00:04:17.292 user 0m0.176s 00:04:17.292 sys 0m0.021s 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.292 23:44:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.292 ************************************ 00:04:17.292 END TEST rpc_trace_cmd_test 00:04:17.292 ************************************ 00:04:17.292 23:44:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.292 23:44:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.292 23:44:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.292 23:44:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.292 23:44:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.292 23:44:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.292 ************************************ 00:04:17.292 START TEST rpc_daemon_integrity 00:04:17.292 ************************************ 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.292 23:44:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.551 { 00:04:17.551 "name": "Malloc2", 00:04:17.551 "aliases": [ 00:04:17.551 "b7d06128-e8e4-46e1-8457-fa26acbb32e1" 00:04:17.551 ], 00:04:17.551 "product_name": "Malloc disk", 00:04:17.551 "block_size": 512, 00:04:17.551 "num_blocks": 16384, 00:04:17.551 "uuid": "b7d06128-e8e4-46e1-8457-fa26acbb32e1", 00:04:17.551 "assigned_rate_limits": { 00:04:17.551 "rw_ios_per_sec": 0, 00:04:17.551 "rw_mbytes_per_sec": 0, 00:04:17.551 "r_mbytes_per_sec": 0, 00:04:17.551 "w_mbytes_per_sec": 0 00:04:17.551 }, 00:04:17.551 "claimed": false, 00:04:17.551 "zoned": false, 00:04:17.551 "supported_io_types": { 00:04:17.551 "read": true, 00:04:17.551 "write": true, 00:04:17.551 "unmap": true, 00:04:17.551 "flush": true, 00:04:17.551 "reset": true, 00:04:17.551 "nvme_admin": false, 00:04:17.551 "nvme_io": false, 00:04:17.551 "nvme_io_md": false, 00:04:17.551 "write_zeroes": true, 00:04:17.551 "zcopy": true, 00:04:17.551 "get_zone_info": false, 00:04:17.551 "zone_management": false, 00:04:17.551 "zone_append": false, 00:04:17.551 "compare": false, 00:04:17.551 "compare_and_write": false, 00:04:17.551 "abort": true, 00:04:17.551 "seek_hole": false, 00:04:17.551 "seek_data": false, 00:04:17.551 "copy": true, 00:04:17.551 "nvme_iov_md": false 00:04:17.551 }, 00:04:17.551 "memory_domains": [ 00:04:17.551 { 
00:04:17.551 "dma_device_id": "system", 00:04:17.551 "dma_device_type": 1 00:04:17.551 }, 00:04:17.551 { 00:04:17.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.551 "dma_device_type": 2 00:04:17.551 } 00:04:17.551 ], 00:04:17.551 "driver_specific": {} 00:04:17.551 } 00:04:17.551 ]' 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.551 [2024-12-13 23:44:56.494477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.551 [2024-12-13 23:44:56.494516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.551 [2024-12-13 23:44:56.494536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022e80 00:04:17.551 [2024-12-13 23:44:56.494545] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.551 [2024-12-13 23:44:56.496461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.551 [2024-12-13 23:44:56.496485] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.551 Passthru0 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:17.551 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.551 { 00:04:17.551 "name": "Malloc2", 00:04:17.551 "aliases": [ 00:04:17.551 "b7d06128-e8e4-46e1-8457-fa26acbb32e1" 00:04:17.551 ], 00:04:17.551 "product_name": "Malloc disk", 00:04:17.551 "block_size": 512, 00:04:17.551 "num_blocks": 16384, 00:04:17.551 "uuid": "b7d06128-e8e4-46e1-8457-fa26acbb32e1", 00:04:17.551 "assigned_rate_limits": { 00:04:17.551 "rw_ios_per_sec": 0, 00:04:17.551 "rw_mbytes_per_sec": 0, 00:04:17.551 "r_mbytes_per_sec": 0, 00:04:17.551 "w_mbytes_per_sec": 0 00:04:17.551 }, 00:04:17.551 "claimed": true, 00:04:17.551 "claim_type": "exclusive_write", 00:04:17.551 "zoned": false, 00:04:17.551 "supported_io_types": { 00:04:17.551 "read": true, 00:04:17.551 "write": true, 00:04:17.551 "unmap": true, 00:04:17.551 "flush": true, 00:04:17.551 "reset": true, 00:04:17.551 "nvme_admin": false, 00:04:17.551 "nvme_io": false, 00:04:17.551 "nvme_io_md": false, 00:04:17.551 "write_zeroes": true, 00:04:17.551 "zcopy": true, 00:04:17.551 "get_zone_info": false, 00:04:17.551 "zone_management": false, 00:04:17.551 "zone_append": false, 00:04:17.551 "compare": false, 00:04:17.551 "compare_and_write": false, 00:04:17.551 "abort": true, 00:04:17.551 "seek_hole": false, 00:04:17.551 "seek_data": false, 00:04:17.551 "copy": true, 00:04:17.552 "nvme_iov_md": false 00:04:17.552 }, 00:04:17.552 "memory_domains": [ 00:04:17.552 { 00:04:17.552 "dma_device_id": "system", 00:04:17.552 "dma_device_type": 1 00:04:17.552 }, 00:04:17.552 { 00:04:17.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.552 "dma_device_type": 2 00:04:17.552 } 00:04:17.552 ], 00:04:17.552 "driver_specific": {} 00:04:17.552 }, 00:04:17.552 { 00:04:17.552 "name": "Passthru0", 00:04:17.552 "aliases": [ 00:04:17.552 "99aa46b3-426b-5661-87d2-bdb829b9086a" 00:04:17.552 ], 00:04:17.552 "product_name": "passthru", 00:04:17.552 "block_size": 512, 00:04:17.552 "num_blocks": 16384, 00:04:17.552 "uuid": 
"99aa46b3-426b-5661-87d2-bdb829b9086a", 00:04:17.552 "assigned_rate_limits": { 00:04:17.552 "rw_ios_per_sec": 0, 00:04:17.552 "rw_mbytes_per_sec": 0, 00:04:17.552 "r_mbytes_per_sec": 0, 00:04:17.552 "w_mbytes_per_sec": 0 00:04:17.552 }, 00:04:17.552 "claimed": false, 00:04:17.552 "zoned": false, 00:04:17.552 "supported_io_types": { 00:04:17.552 "read": true, 00:04:17.552 "write": true, 00:04:17.552 "unmap": true, 00:04:17.552 "flush": true, 00:04:17.552 "reset": true, 00:04:17.552 "nvme_admin": false, 00:04:17.552 "nvme_io": false, 00:04:17.552 "nvme_io_md": false, 00:04:17.552 "write_zeroes": true, 00:04:17.552 "zcopy": true, 00:04:17.552 "get_zone_info": false, 00:04:17.552 "zone_management": false, 00:04:17.552 "zone_append": false, 00:04:17.552 "compare": false, 00:04:17.552 "compare_and_write": false, 00:04:17.552 "abort": true, 00:04:17.552 "seek_hole": false, 00:04:17.552 "seek_data": false, 00:04:17.552 "copy": true, 00:04:17.552 "nvme_iov_md": false 00:04:17.552 }, 00:04:17.552 "memory_domains": [ 00:04:17.552 { 00:04:17.552 "dma_device_id": "system", 00:04:17.552 "dma_device_type": 1 00:04:17.552 }, 00:04:17.552 { 00:04:17.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.552 "dma_device_type": 2 00:04:17.552 } 00:04:17.552 ], 00:04:17.552 "driver_specific": { 00:04:17.552 "passthru": { 00:04:17.552 "name": "Passthru0", 00:04:17.552 "base_bdev_name": "Malloc2" 00:04:17.552 } 00:04:17.552 } 00:04:17.552 } 00:04:17.552 ]' 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.552 00:04:17.552 real 0m0.295s 00:04:17.552 user 0m0.161s 00:04:17.552 sys 0m0.038s 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.552 23:44:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.552 ************************************ 00:04:17.552 END TEST rpc_daemon_integrity 00:04:17.552 ************************************ 00:04:17.552 23:44:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.552 23:44:56 rpc -- rpc/rpc.sh@84 -- # killprocess 3790072 00:04:17.552 23:44:56 rpc -- common/autotest_common.sh@954 -- # '[' -z 3790072 ']' 00:04:17.552 23:44:56 rpc -- common/autotest_common.sh@958 -- # kill -0 3790072 00:04:17.552 23:44:56 rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.810 23:44:56 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790072 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790072' 00:04:17.810 killing process with pid 3790072 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@973 -- # kill 3790072 00:04:17.810 23:44:56 rpc -- common/autotest_common.sh@978 -- # wait 3790072 00:04:20.343 00:04:20.343 real 0m4.828s 00:04:20.343 user 0m5.392s 00:04:20.343 sys 0m0.800s 00:04:20.343 23:44:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.343 23:44:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 ************************************ 00:04:20.343 END TEST rpc 00:04:20.343 ************************************ 00:04:20.343 23:44:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:20.343 23:44:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.343 23:44:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.343 23:44:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 ************************************ 00:04:20.343 START TEST skip_rpc 00:04:20.343 ************************************ 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:20.343 * Looking for test storage... 
00:04:20.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.343 23:44:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.343 --rc genhtml_branch_coverage=1 00:04:20.343 --rc genhtml_function_coverage=1 00:04:20.343 --rc genhtml_legend=1 00:04:20.343 --rc geninfo_all_blocks=1 00:04:20.343 --rc geninfo_unexecuted_blocks=1 00:04:20.343 00:04:20.343 ' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.343 --rc genhtml_branch_coverage=1 00:04:20.343 --rc genhtml_function_coverage=1 00:04:20.343 --rc genhtml_legend=1 00:04:20.343 --rc geninfo_all_blocks=1 00:04:20.343 --rc geninfo_unexecuted_blocks=1 00:04:20.343 00:04:20.343 ' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.343 --rc genhtml_branch_coverage=1 00:04:20.343 --rc genhtml_function_coverage=1 00:04:20.343 --rc genhtml_legend=1 00:04:20.343 --rc geninfo_all_blocks=1 00:04:20.343 --rc geninfo_unexecuted_blocks=1 00:04:20.343 00:04:20.343 ' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.343 --rc genhtml_branch_coverage=1 00:04:20.343 --rc genhtml_function_coverage=1 00:04:20.343 --rc genhtml_legend=1 00:04:20.343 --rc geninfo_all_blocks=1 00:04:20.343 --rc geninfo_unexecuted_blocks=1 00:04:20.343 00:04:20.343 ' 00:04:20.343 23:44:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.343 23:44:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.343 23:44:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.343 23:44:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.343 ************************************ 00:04:20.343 START TEST skip_rpc 00:04:20.343 ************************************ 00:04:20.343 23:44:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:20.343 23:44:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3790939 00:04:20.343 23:44:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.343 23:44:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.343 23:44:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:04:20.343 [2024-12-13 23:44:59.374913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:20.343 [2024-12-13 23:44:59.375009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790939 ] 00:04:20.602 [2024-12-13 23:44:59.486689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.602 [2024-12-13 23:44:59.590960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.873 23:45:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3790939 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3790939 ']' 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3790939 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790939 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790939' 00:04:25.873 killing process with pid 3790939 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3790939 00:04:25.873 23:45:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3790939 00:04:27.778 00:04:27.778 real 0m7.383s 00:04:27.778 user 0m6.999s 00:04:27.778 sys 0m0.394s 00:04:27.778 23:45:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.778 23:45:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.778 ************************************ 00:04:27.778 END TEST skip_rpc 00:04:27.778 ************************************ 00:04:27.778 23:45:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.778 23:45:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.778 23:45:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.778 23:45:06 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.778 ************************************ 00:04:27.778 START TEST skip_rpc_with_json 00:04:27.778 ************************************ 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3792693 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3792693 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3792693 ']' 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.778 23:45:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.778 [2024-12-13 23:45:06.826343] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:27.778 [2024-12-13 23:45:06.826431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792693 ] 00:04:28.037 [2024-12-13 23:45:06.939729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.037 [2024-12-13 23:45:07.049389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.977 [2024-12-13 23:45:07.877351] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.977 request: 00:04:28.977 { 00:04:28.977 "trtype": "tcp", 00:04:28.977 "method": "nvmf_get_transports", 00:04:28.977 "req_id": 1 00:04:28.977 } 00:04:28.977 Got JSON-RPC error response 00:04:28.977 response: 00:04:28.977 { 00:04:28.977 "code": -19, 00:04:28.977 "message": "No such device" 00:04:28.977 } 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.977 [2024-12-13 23:45:07.885463] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.977 23:45:07 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.977 23:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.977 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.977 23:45:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.977 { 00:04:28.977 "subsystems": [ 00:04:28.977 { 00:04:28.977 "subsystem": "fsdev", 00:04:28.977 "config": [ 00:04:28.977 { 00:04:28.977 "method": "fsdev_set_opts", 00:04:28.977 "params": { 00:04:28.977 "fsdev_io_pool_size": 65535, 00:04:28.977 "fsdev_io_cache_size": 256 00:04:28.977 } 00:04:28.977 } 00:04:28.977 ] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "keyring", 00:04:28.977 "config": [] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "iobuf", 00:04:28.977 "config": [ 00:04:28.977 { 00:04:28.977 "method": "iobuf_set_options", 00:04:28.977 "params": { 00:04:28.977 "small_pool_count": 8192, 00:04:28.977 "large_pool_count": 1024, 00:04:28.977 "small_bufsize": 8192, 00:04:28.977 "large_bufsize": 135168, 00:04:28.977 "enable_numa": false 00:04:28.977 } 00:04:28.977 } 00:04:28.977 ] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "sock", 00:04:28.977 "config": [ 00:04:28.977 { 00:04:28.977 "method": "sock_set_default_impl", 00:04:28.977 "params": { 00:04:28.977 "impl_name": "posix" 00:04:28.977 } 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "method": "sock_impl_set_options", 00:04:28.977 "params": { 00:04:28.977 "impl_name": "ssl", 00:04:28.977 "recv_buf_size": 4096, 00:04:28.977 "send_buf_size": 4096, 00:04:28.977 "enable_recv_pipe": true, 00:04:28.977 "enable_quickack": false, 00:04:28.977 
"enable_placement_id": 0, 00:04:28.977 "enable_zerocopy_send_server": true, 00:04:28.977 "enable_zerocopy_send_client": false, 00:04:28.977 "zerocopy_threshold": 0, 00:04:28.977 "tls_version": 0, 00:04:28.977 "enable_ktls": false 00:04:28.977 } 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "method": "sock_impl_set_options", 00:04:28.977 "params": { 00:04:28.977 "impl_name": "posix", 00:04:28.977 "recv_buf_size": 2097152, 00:04:28.977 "send_buf_size": 2097152, 00:04:28.977 "enable_recv_pipe": true, 00:04:28.977 "enable_quickack": false, 00:04:28.977 "enable_placement_id": 0, 00:04:28.977 "enable_zerocopy_send_server": true, 00:04:28.977 "enable_zerocopy_send_client": false, 00:04:28.977 "zerocopy_threshold": 0, 00:04:28.977 "tls_version": 0, 00:04:28.977 "enable_ktls": false 00:04:28.977 } 00:04:28.977 } 00:04:28.977 ] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "vmd", 00:04:28.977 "config": [] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "accel", 00:04:28.977 "config": [ 00:04:28.977 { 00:04:28.977 "method": "accel_set_options", 00:04:28.977 "params": { 00:04:28.977 "small_cache_size": 128, 00:04:28.977 "large_cache_size": 16, 00:04:28.977 "task_count": 2048, 00:04:28.977 "sequence_count": 2048, 00:04:28.977 "buf_count": 2048 00:04:28.977 } 00:04:28.977 } 00:04:28.977 ] 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "subsystem": "bdev", 00:04:28.977 "config": [ 00:04:28.977 { 00:04:28.977 "method": "bdev_set_options", 00:04:28.977 "params": { 00:04:28.977 "bdev_io_pool_size": 65535, 00:04:28.977 "bdev_io_cache_size": 256, 00:04:28.977 "bdev_auto_examine": true, 00:04:28.977 "iobuf_small_cache_size": 128, 00:04:28.977 "iobuf_large_cache_size": 16 00:04:28.977 } 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "method": "bdev_raid_set_options", 00:04:28.977 "params": { 00:04:28.977 "process_window_size_kb": 1024, 00:04:28.977 "process_max_bandwidth_mb_sec": 0 00:04:28.977 } 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "method": "bdev_iscsi_set_options", 
00:04:28.977 "params": { 00:04:28.977 "timeout_sec": 30 00:04:28.977 } 00:04:28.977 }, 00:04:28.977 { 00:04:28.977 "method": "bdev_nvme_set_options", 00:04:28.977 "params": { 00:04:28.977 "action_on_timeout": "none", 00:04:28.977 "timeout_us": 0, 00:04:28.977 "timeout_admin_us": 0, 00:04:28.978 "keep_alive_timeout_ms": 10000, 00:04:28.978 "arbitration_burst": 0, 00:04:28.978 "low_priority_weight": 0, 00:04:28.978 "medium_priority_weight": 0, 00:04:28.978 "high_priority_weight": 0, 00:04:28.978 "nvme_adminq_poll_period_us": 10000, 00:04:28.978 "nvme_ioq_poll_period_us": 0, 00:04:28.978 "io_queue_requests": 0, 00:04:28.978 "delay_cmd_submit": true, 00:04:28.978 "transport_retry_count": 4, 00:04:28.978 "bdev_retry_count": 3, 00:04:28.978 "transport_ack_timeout": 0, 00:04:28.978 "ctrlr_loss_timeout_sec": 0, 00:04:28.978 "reconnect_delay_sec": 0, 00:04:28.978 "fast_io_fail_timeout_sec": 0, 00:04:28.978 "disable_auto_failback": false, 00:04:28.978 "generate_uuids": false, 00:04:28.978 "transport_tos": 0, 00:04:28.978 "nvme_error_stat": false, 00:04:28.978 "rdma_srq_size": 0, 00:04:28.978 "io_path_stat": false, 00:04:28.978 "allow_accel_sequence": false, 00:04:28.978 "rdma_max_cq_size": 0, 00:04:28.978 "rdma_cm_event_timeout_ms": 0, 00:04:28.978 "dhchap_digests": [ 00:04:28.978 "sha256", 00:04:28.978 "sha384", 00:04:28.978 "sha512" 00:04:28.978 ], 00:04:28.978 "dhchap_dhgroups": [ 00:04:28.978 "null", 00:04:28.978 "ffdhe2048", 00:04:28.978 "ffdhe3072", 00:04:28.978 "ffdhe4096", 00:04:28.978 "ffdhe6144", 00:04:28.978 "ffdhe8192" 00:04:28.978 ], 00:04:28.978 "rdma_umr_per_io": false 00:04:28.978 } 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "method": "bdev_nvme_set_hotplug", 00:04:28.978 "params": { 00:04:28.978 "period_us": 100000, 00:04:28.978 "enable": false 00:04:28.978 } 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "method": "bdev_wait_for_examine" 00:04:28.978 } 00:04:28.978 ] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "scsi", 00:04:28.978 "config": null 
00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "scheduler", 00:04:28.978 "config": [ 00:04:28.978 { 00:04:28.978 "method": "framework_set_scheduler", 00:04:28.978 "params": { 00:04:28.978 "name": "static" 00:04:28.978 } 00:04:28.978 } 00:04:28.978 ] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "vhost_scsi", 00:04:28.978 "config": [] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "vhost_blk", 00:04:28.978 "config": [] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "ublk", 00:04:28.978 "config": [] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "nbd", 00:04:28.978 "config": [] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "nvmf", 00:04:28.978 "config": [ 00:04:28.978 { 00:04:28.978 "method": "nvmf_set_config", 00:04:28.978 "params": { 00:04:28.978 "discovery_filter": "match_any", 00:04:28.978 "admin_cmd_passthru": { 00:04:28.978 "identify_ctrlr": false 00:04:28.978 }, 00:04:28.978 "dhchap_digests": [ 00:04:28.978 "sha256", 00:04:28.978 "sha384", 00:04:28.978 "sha512" 00:04:28.978 ], 00:04:28.978 "dhchap_dhgroups": [ 00:04:28.978 "null", 00:04:28.978 "ffdhe2048", 00:04:28.978 "ffdhe3072", 00:04:28.978 "ffdhe4096", 00:04:28.978 "ffdhe6144", 00:04:28.978 "ffdhe8192" 00:04:28.978 ] 00:04:28.978 } 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "method": "nvmf_set_max_subsystems", 00:04:28.978 "params": { 00:04:28.978 "max_subsystems": 1024 00:04:28.978 } 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "method": "nvmf_set_crdt", 00:04:28.978 "params": { 00:04:28.978 "crdt1": 0, 00:04:28.978 "crdt2": 0, 00:04:28.978 "crdt3": 0 00:04:28.978 } 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "method": "nvmf_create_transport", 00:04:28.978 "params": { 00:04:28.978 "trtype": "TCP", 00:04:28.978 "max_queue_depth": 128, 00:04:28.978 "max_io_qpairs_per_ctrlr": 127, 00:04:28.978 "in_capsule_data_size": 4096, 00:04:28.978 "max_io_size": 131072, 00:04:28.978 "io_unit_size": 131072, 00:04:28.978 "max_aq_depth": 128, 00:04:28.978 
"num_shared_buffers": 511, 00:04:28.978 "buf_cache_size": 4294967295, 00:04:28.978 "dif_insert_or_strip": false, 00:04:28.978 "zcopy": false, 00:04:28.978 "c2h_success": true, 00:04:28.978 "sock_priority": 0, 00:04:28.978 "abort_timeout_sec": 1, 00:04:28.978 "ack_timeout": 0, 00:04:28.978 "data_wr_pool_size": 0 00:04:28.978 } 00:04:28.978 } 00:04:28.978 ] 00:04:28.978 }, 00:04:28.978 { 00:04:28.978 "subsystem": "iscsi", 00:04:28.978 "config": [ 00:04:28.978 { 00:04:28.978 "method": "iscsi_set_options", 00:04:28.978 "params": { 00:04:28.978 "node_base": "iqn.2016-06.io.spdk", 00:04:28.978 "max_sessions": 128, 00:04:28.978 "max_connections_per_session": 2, 00:04:28.978 "max_queue_depth": 64, 00:04:28.978 "default_time2wait": 2, 00:04:28.978 "default_time2retain": 20, 00:04:28.978 "first_burst_length": 8192, 00:04:28.978 "immediate_data": true, 00:04:28.978 "allow_duplicated_isid": false, 00:04:28.978 "error_recovery_level": 0, 00:04:28.978 "nop_timeout": 60, 00:04:28.978 "nop_in_interval": 30, 00:04:28.978 "disable_chap": false, 00:04:28.978 "require_chap": false, 00:04:28.978 "mutual_chap": false, 00:04:28.978 "chap_group": 0, 00:04:28.978 "max_large_datain_per_connection": 64, 00:04:28.978 "max_r2t_per_connection": 4, 00:04:28.978 "pdu_pool_size": 36864, 00:04:28.978 "immediate_data_pool_size": 16384, 00:04:28.978 "data_out_pool_size": 2048 00:04:28.978 } 00:04:28.978 } 00:04:28.978 ] 00:04:28.978 } 00:04:28.978 ] 00:04:28.978 } 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3792693 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3792693 ']' 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3792693 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.978 23:45:08 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792693 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792693' 00:04:28.978 killing process with pid 3792693 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3792693 00:04:28.978 23:45:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3792693 00:04:31.519 23:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3793406 00:04:31.520 23:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.520 23:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3793406 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3793406 ']' 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3793406 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3793406 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.795 
23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3793406' 00:04:36.795 killing process with pid 3793406 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3793406 00:04:36.795 23:45:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3793406 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:38.703 00:04:38.703 real 0m11.009s 00:04:38.703 user 0m10.569s 00:04:38.703 sys 0m0.847s 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.703 ************************************ 00:04:38.703 END TEST skip_rpc_with_json 00:04:38.703 ************************************ 00:04:38.703 23:45:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.703 23:45:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.703 23:45:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.703 23:45:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.703 ************************************ 00:04:38.703 START TEST skip_rpc_with_delay 00:04:38.703 ************************************ 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:38.703 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.962 [2024-12-13 23:45:17.899349] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.962 00:04:38.962 real 0m0.132s 00:04:38.962 user 0m0.072s 00:04:38.962 sys 0m0.059s 00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.962 23:45:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.962 ************************************ 00:04:38.962 END TEST skip_rpc_with_delay 00:04:38.962 ************************************ 00:04:38.962 23:45:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.962 23:45:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.962 23:45:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.962 23:45:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.962 23:45:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.962 23:45:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.962 ************************************ 00:04:38.962 START TEST exit_on_failed_rpc_init 00:04:38.962 ************************************ 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3794680 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3794680 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3794680 ']' 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.962 23:45:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.962 [2024-12-13 23:45:18.102644] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:38.962 [2024-12-13 23:45:18.102732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794680 ] 00:04:39.222 [2024-12-13 23:45:18.215127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.222 [2024-12-13 23:45:18.318435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.162 23:45:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:40.162 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.162 [2024-12-13 23:45:19.219669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:40.162 [2024-12-13 23:45:19.219755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3794908 ] 00:04:40.422 [2024-12-13 23:45:19.330128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.422 [2024-12-13 23:45:19.439010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.422 [2024-12-13 23:45:19.439086] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:40.422 [2024-12-13 23:45:19.439104] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.422 [2024-12-13 23:45:19.439114] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.681 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3794680 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3794680 ']' 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3794680 00:04:40.682 23:45:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3794680 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3794680' 00:04:40.682 killing process with pid 3794680 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3794680 00:04:40.682 23:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3794680 00:04:43.218 00:04:43.218 real 0m4.044s 00:04:43.218 user 0m4.386s 00:04:43.218 sys 0m0.579s 00:04:43.218 23:45:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.218 23:45:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 END TEST exit_on_failed_rpc_init 00:04:43.218 ************************************ 00:04:43.218 23:45:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.218 00:04:43.218 real 0m22.994s 00:04:43.218 user 0m22.223s 00:04:43.218 sys 0m2.132s 00:04:43.218 23:45:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.218 23:45:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 END TEST skip_rpc 00:04:43.218 ************************************ 00:04:43.218 23:45:22 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.218 23:45:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.218 23:45:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.218 23:45:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.218 ************************************ 00:04:43.218 START TEST rpc_client 00:04:43.218 ************************************ 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:43.218 * Looking for test storage... 00:04:43.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.218 23:45:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.218 --rc genhtml_branch_coverage=1 00:04:43.218 --rc genhtml_function_coverage=1 00:04:43.218 --rc genhtml_legend=1 00:04:43.218 --rc geninfo_all_blocks=1 00:04:43.218 --rc geninfo_unexecuted_blocks=1 00:04:43.218 00:04:43.218 ' 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.218 --rc genhtml_branch_coverage=1 
00:04:43.218 --rc genhtml_function_coverage=1 00:04:43.218 --rc genhtml_legend=1 00:04:43.218 --rc geninfo_all_blocks=1 00:04:43.218 --rc geninfo_unexecuted_blocks=1 00:04:43.218 00:04:43.218 ' 00:04:43.218 23:45:22 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.218 --rc genhtml_branch_coverage=1 00:04:43.218 --rc genhtml_function_coverage=1 00:04:43.219 --rc genhtml_legend=1 00:04:43.219 --rc geninfo_all_blocks=1 00:04:43.219 --rc geninfo_unexecuted_blocks=1 00:04:43.219 00:04:43.219 ' 00:04:43.219 23:45:22 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.219 --rc genhtml_branch_coverage=1 00:04:43.219 --rc genhtml_function_coverage=1 00:04:43.219 --rc genhtml_legend=1 00:04:43.219 --rc geninfo_all_blocks=1 00:04:43.219 --rc geninfo_unexecuted_blocks=1 00:04:43.219 00:04:43.219 ' 00:04:43.219 23:45:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:43.478 OK 00:04:43.478 23:45:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.478 00:04:43.478 real 0m0.221s 00:04:43.478 user 0m0.127s 00:04:43.478 sys 0m0.106s 00:04:43.478 23:45:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.478 23:45:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.478 ************************************ 00:04:43.478 END TEST rpc_client 00:04:43.478 ************************************ 00:04:43.478 23:45:22 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.478 23:45:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.478 23:45:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.479 23:45:22 -- common/autotest_common.sh@10 
-- # set +x 00:04:43.479 ************************************ 00:04:43.479 START TEST json_config 00:04:43.479 ************************************ 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.479 23:45:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.479 23:45:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.479 23:45:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.479 23:45:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.479 23:45:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.479 23:45:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:43.479 23:45:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.479 23:45:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.479 23:45:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.479 23:45:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.479 23:45:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.479 23:45:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.479 23:45:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.479 23:45:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.479 23:45:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.479 --rc genhtml_branch_coverage=1 00:04:43.479 --rc genhtml_function_coverage=1 00:04:43.479 --rc genhtml_legend=1 00:04:43.479 --rc geninfo_all_blocks=1 00:04:43.479 --rc geninfo_unexecuted_blocks=1 00:04:43.479 00:04:43.479 ' 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.479 --rc genhtml_branch_coverage=1 00:04:43.479 --rc genhtml_function_coverage=1 00:04:43.479 --rc genhtml_legend=1 00:04:43.479 --rc geninfo_all_blocks=1 00:04:43.479 --rc geninfo_unexecuted_blocks=1 00:04:43.479 00:04:43.479 ' 00:04:43.479 23:45:22 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.479 --rc genhtml_branch_coverage=1 00:04:43.479 --rc genhtml_function_coverage=1 00:04:43.479 --rc genhtml_legend=1 00:04:43.479 --rc geninfo_all_blocks=1 00:04:43.479 --rc geninfo_unexecuted_blocks=1 00:04:43.479 00:04:43.479 ' 00:04:43.479 23:45:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.479 --rc genhtml_branch_coverage=1 00:04:43.479 --rc genhtml_function_coverage=1 00:04:43.479 --rc genhtml_legend=1 00:04:43.479 --rc geninfo_all_blocks=1 00:04:43.479 --rc geninfo_unexecuted_blocks=1 00:04:43.479 00:04:43.479 ' 00:04:43.479 23:45:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.479 23:45:22 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:43.479 23:45:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.479 23:45:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.479 23:45:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.479 23:45:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.739 23:45:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.739 23:45:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.739 23:45:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.739 23:45:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.739 23:45:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@51 -- # : 0 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.739 23:45:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.739 23:45:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:43.740 INFO: JSON configuration test init 00:04:43.740 23:45:22 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.740 23:45:22 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:43.740 23:45:22 json_config -- json_config/common.sh@9 -- # local app=target 00:04:43.740 23:45:22 json_config -- json_config/common.sh@10 -- # shift 00:04:43.740 23:45:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.740 23:45:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.740 23:45:22 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.740 23:45:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.740 23:45:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.740 23:45:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3795695 00:04:43.740 23:45:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.740 Waiting for target to run... 
00:04:43.740 23:45:22 json_config -- json_config/common.sh@25 -- # waitforlisten 3795695 /var/tmp/spdk_tgt.sock 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@835 -- # '[' -z 3795695 ']' 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:43.740 23:45:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.740 23:45:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.740 [2024-12-13 23:45:22.723265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:43.740 [2024-12-13 23:45:22.723358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795695 ] 00:04:43.999 [2024-12-13 23:45:23.054336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.258 [2024-12-13 23:45:23.154074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:44.518 23:45:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.518 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.518 23:45:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:44.518 23:45:23 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:44.518 23:45:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:48.711 23:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@54 -- # sort 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:48.711 23:45:27 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.711 23:45:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.711 23:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:48.711 MallocForNvmf0 00:04:48.711 23:45:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:48.711 23:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:48.970 MallocForNvmf1 00:04:48.970 23:45:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.970 23:45:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:48.970 [2024-12-13 23:45:28.054328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.970 23:45:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.970 23:45:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:49.229 23:45:28 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.229 23:45:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:49.489 23:45:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.489 23:45:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:49.489 23:45:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.489 23:45:28 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:49.747 [2024-12-13 23:45:28.788671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.748 23:45:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:49.748 23:45:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.748 23:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.748 23:45:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:49.748 23:45:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.748 23:45:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.748 23:45:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:49.748 23:45:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:49.748 23:45:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:50.006 MallocBdevForConfigChangeCheck 00:04:50.006 23:45:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:50.006 23:45:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.006 23:45:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.006 23:45:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:50.006 23:45:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.573 23:45:29 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:50.573 INFO: shutting down applications... 00:04:50.573 23:45:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:50.573 23:45:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:50.573 23:45:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:50.573 23:45:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:51.949 Calling clear_iscsi_subsystem 00:04:51.949 Calling clear_nvmf_subsystem 00:04:51.949 Calling clear_nbd_subsystem 00:04:51.949 Calling clear_ublk_subsystem 00:04:51.949 Calling clear_vhost_blk_subsystem 00:04:51.949 Calling clear_vhost_scsi_subsystem 00:04:51.949 Calling clear_bdev_subsystem 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:51.949 23:45:31 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:52.208 23:45:31 json_config -- json_config/json_config.sh@352 -- # break 00:04:52.208 23:45:31 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:52.208 23:45:31 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:52.208 23:45:31 json_config -- json_config/common.sh@31 -- # local app=target 00:04:52.208 23:45:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.208 23:45:31 json_config -- json_config/common.sh@35 -- # [[ -n 3795695 ]] 00:04:52.208 23:45:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3795695 00:04:52.208 23:45:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.208 23:45:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.208 23:45:31 json_config -- json_config/common.sh@41 -- # kill -0 3795695 00:04:52.208 23:45:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.775 23:45:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.775 23:45:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.775 23:45:31 json_config -- json_config/common.sh@41 -- # kill -0 3795695 00:04:52.775 23:45:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.343 23:45:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.343 23:45:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.343 23:45:32 json_config -- json_config/common.sh@41 -- # kill -0 3795695 00:04:53.343 23:45:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.343 23:45:32 json_config -- json_config/common.sh@43 -- # break 00:04:53.343 23:45:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.343 23:45:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.343 SPDK target shutdown done 00:04:53.343 23:45:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:53.343 INFO: relaunching applications... 
00:04:53.343 23:45:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.343 23:45:32 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.343 23:45:32 json_config -- json_config/common.sh@10 -- # shift 00:04:53.343 23:45:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.343 23:45:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.343 23:45:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.343 23:45:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.343 23:45:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.343 23:45:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3797399 00:04:53.343 23:45:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.343 Waiting for target to run... 00:04:53.343 23:45:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:53.343 23:45:32 json_config -- json_config/common.sh@25 -- # waitforlisten 3797399 /var/tmp/spdk_tgt.sock 00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@835 -- # '[' -z 3797399 ']' 00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.343 23:45:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.343 [2024-12-13 23:45:32.445145] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:53.343 [2024-12-13 23:45:32.445239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797399 ] 00:04:53.911 [2024-12-13 23:45:32.935051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.170 [2024-12-13 23:45:33.054924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.580 [2024-12-13 23:45:36.728010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.580 [2024-12-13 23:45:36.760314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.580 23:45:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.580 23:45:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:58.580 23:45:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.580 00:04:58.580 23:45:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:58.580 23:45:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:58.580 INFO: Checking if target configuration is the same... 
00:04:58.580 23:45:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:58.580 23:45:36 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.580 23:45:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:58.580 + '[' 2 -ne 2 ']' 00:04:58.580 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:58.580 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:58.580 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.580 +++ basename /dev/fd/62 00:04:58.580 ++ mktemp /tmp/62.XXX 00:04:58.580 + tmp_file_1=/tmp/62.nwW 00:04:58.580 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:58.580 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:58.580 + tmp_file_2=/tmp/spdk_tgt_config.json.FXm 00:04:58.580 + ret=0 00:04:58.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:58.580 + diff -u /tmp/62.nwW /tmp/spdk_tgt_config.json.FXm 00:04:58.580 + echo 'INFO: JSON config files are the same' 00:04:58.580 INFO: JSON config files are the same 00:04:58.580 + rm /tmp/62.nwW /tmp/spdk_tgt_config.json.FXm 00:04:58.580 + exit 0 00:04:58.580 23:45:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:58.580 23:45:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:58.580 INFO: changing configuration and checking if this can be detected... 
00:04:58.580 23:45:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:58.580 23:45:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:58.580 23:45:37 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:58.580 23:45:37 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:58.580 23:45:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:58.580 + '[' 2 -ne 2 ']'
00:04:58.580 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:04:58.580 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:04:58.580 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:58.580 +++ basename /dev/fd/62
00:04:58.580 ++ mktemp /tmp/62.XXX
00:04:58.580 + tmp_file_1=/tmp/62.hVz
00:04:58.580 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:04:58.580 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:58.580 + tmp_file_2=/tmp/spdk_tgt_config.json.ee9
00:04:58.580 + ret=0
00:04:58.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:58.580 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:04:58.898 + diff -u /tmp/62.hVz /tmp/spdk_tgt_config.json.ee9
00:04:58.898 + ret=1
00:04:58.898 + echo '=== Start of file: /tmp/62.hVz ==='
00:04:58.898 + cat /tmp/62.hVz
00:04:58.898 + echo '=== End of file: /tmp/62.hVz ==='
00:04:58.898 + echo ''
00:04:58.898 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ee9 ==='
00:04:58.898 + cat /tmp/spdk_tgt_config.json.ee9
00:04:58.898 + echo '=== End of file: /tmp/spdk_tgt_config.json.ee9 ==='
00:04:58.898 + echo ''
00:04:58.898 + rm /tmp/62.hVz /tmp/spdk_tgt_config.json.ee9
00:04:58.898 + exit 1
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:58.898 INFO: configuration change detected.
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 3797399 ]]
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.898 23:45:37 json_config -- json_config/json_config.sh@330 -- # killprocess 3797399
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@954 -- # '[' -z 3797399 ']'
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@958 -- # kill -0 3797399
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@959 -- # uname
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3797399
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3797399'
00:04:58.898 killing process with pid 3797399
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@973 -- # kill 3797399
00:04:58.898 23:45:37 json_config -- common/autotest_common.sh@978 -- # wait 3797399
00:05:01.432 23:45:40 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:01.432 23:45:40 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:01.432 23:45:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:01.432 23:45:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.432 23:45:40 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:01.432 23:45:40 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:01.432 INFO: Success
00:05:01.432
00:05:01.432 real 0m17.672s
00:05:01.432 user 0m18.084s
00:05:01.432 sys 0m2.753s
00:05:01.432 23:45:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.432 23:45:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:01.432 ************************************
00:05:01.432 END TEST json_config
00:05:01.432 ************************************
00:05:01.432 23:45:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:01.432 23:45:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.432 23:45:40 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.432 23:45:40 -- common/autotest_common.sh@10 -- # set +x
00:05:01.432 ************************************
00:05:01.432 START TEST json_config_extra_key
00:05:01.432 ************************************
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.432 23:45:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.432 --rc genhtml_branch_coverage=1
00:05:01.432 --rc genhtml_function_coverage=1
00:05:01.432 --rc genhtml_legend=1
00:05:01.432 --rc geninfo_all_blocks=1
00:05:01.432 --rc geninfo_unexecuted_blocks=1
00:05:01.432
00:05:01.432 '
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.432 --rc genhtml_branch_coverage=1
00:05:01.432 --rc genhtml_function_coverage=1
00:05:01.432 --rc genhtml_legend=1
00:05:01.432 --rc geninfo_all_blocks=1
00:05:01.432 --rc geninfo_unexecuted_blocks=1
00:05:01.432
00:05:01.432 '
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.432 --rc genhtml_branch_coverage=1
00:05:01.432 --rc genhtml_function_coverage=1
00:05:01.432 --rc genhtml_legend=1
00:05:01.432 --rc geninfo_all_blocks=1
00:05:01.432 --rc geninfo_unexecuted_blocks=1
00:05:01.432
00:05:01.432 '
00:05:01.432 23:45:40 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.433 --rc genhtml_branch_coverage=1
00:05:01.433 --rc genhtml_function_coverage=1
00:05:01.433 --rc genhtml_legend=1
00:05:01.433 --rc geninfo_all_blocks=1
00:05:01.433 --rc geninfo_unexecuted_blocks=1
00:05:01.433
00:05:01.433 '
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:01.433 23:45:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:01.433 23:45:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:01.433 23:45:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:01.433 23:45:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:01.433 23:45:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.433 23:45:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.433 23:45:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.433 23:45:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:01.433 23:45:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:01.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:01.433 23:45:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:01.433 INFO: launching applications...
00:05:01.433 23:45:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3798879
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:01.433 Waiting for target to run...
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3798879 /var/tmp/spdk_tgt.sock
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3798879 ']'
00:05:01.433 23:45:40 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:01.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.433 23:45:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:01.433 [2024-12-13 23:45:40.456409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:01.433 [2024-12-13 23:45:40.456526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3798879 ]
00:05:02.001 [2024-12-13 23:45:40.946456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:02.001 [2024-12-13 23:45:41.061134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.937 23:45:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.937 23:45:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:02.937
00:05:02.937 23:45:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:02.937 INFO: shutting down applications...
00:05:02.937 23:45:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3798879 ]]
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3798879
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:02.937 23:45:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:03.196 23:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:03.196 23:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:03.196 23:45:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:03.196 23:45:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:03.764 23:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:03.764 23:45:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:03.764 23:45:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:03.764 23:45:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:04.332 23:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:04.333 23:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:04.333 23:45:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:04.333 23:45:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:04.900 23:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:04.900 23:45:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:04.900 23:45:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:04.900 23:45:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:05.158 23:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:05.158 23:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:05.158 23:45:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:05.159 23:45:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3798879
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:05.727 23:45:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:05.727 SPDK target shutdown done
00:05:05.727 23:45:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:05.727 Success
00:05:05.727
00:05:05.727 real 0m4.565s
00:05:05.727 user 0m3.858s
00:05:05.727 sys 0m0.692s
00:05:05.727 23:45:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.727 23:45:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:05.727 ************************************
00:05:05.727 END TEST json_config_extra_key
00:05:05.727 ************************************
00:05:05.727 23:45:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:05.727 23:45:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.727 23:45:44 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.727 23:45:44 -- common/autotest_common.sh@10 -- # set +x
00:05:05.727 ************************************
00:05:05.727 START TEST alias_rpc
00:05:05.727 ************************************
00:05:05.727 23:45:44 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:05.987 * Looking for test storage...
00:05:05.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:05.987 23:45:44 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.987 --rc genhtml_branch_coverage=1
00:05:05.987 --rc genhtml_function_coverage=1
00:05:05.987 --rc genhtml_legend=1
00:05:05.987 --rc geninfo_all_blocks=1
00:05:05.987 --rc geninfo_unexecuted_blocks=1
00:05:05.987
00:05:05.987 '
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.987 --rc genhtml_branch_coverage=1
00:05:05.987 --rc genhtml_function_coverage=1
00:05:05.987 --rc genhtml_legend=1
00:05:05.987 --rc geninfo_all_blocks=1
00:05:05.987 --rc geninfo_unexecuted_blocks=1
00:05:05.987
00:05:05.987 '
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.987 --rc genhtml_branch_coverage=1
00:05:05.987 --rc genhtml_function_coverage=1
00:05:05.987 --rc genhtml_legend=1
00:05:05.987 --rc geninfo_all_blocks=1
00:05:05.987 --rc geninfo_unexecuted_blocks=1
00:05:05.987
00:05:05.987 '
00:05:05.987 23:45:44 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.987 --rc genhtml_branch_coverage=1
00:05:05.987 --rc genhtml_function_coverage=1
00:05:05.987 --rc genhtml_legend=1
00:05:05.987 --rc geninfo_all_blocks=1
00:05:05.987 --rc geninfo_unexecuted_blocks=1
00:05:05.987
00:05:05.987 '
00:05:05.987 23:45:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:05.987 23:45:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:05.987 23:45:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3799622
00:05:05.987 23:45:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3799622
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3799622 ']'
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:05.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:05.987 23:45:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:05.987 [2024-12-13 23:45:45.072746] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:05:05.987 [2024-12-13 23:45:45.072839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3799622 ] 00:05:06.245 [2024-12-13 23:45:45.184673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.245 [2024-12-13 23:45:45.289925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.180 23:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:07.180 23:45:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3799622 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3799622 ']' 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3799622 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.180 23:45:46 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3799622 00:05:07.439 23:45:46 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.439 23:45:46 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.439 23:45:46 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3799622' 00:05:07.439 killing process with pid 3799622 00:05:07.439 23:45:46 alias_rpc -- common/autotest_common.sh@973 -- # kill 3799622 00:05:07.439 23:45:46 alias_rpc -- common/autotest_common.sh@978 -- # wait 3799622 00:05:09.974 00:05:09.974 real 0m3.836s 00:05:09.974 user 0m3.868s 00:05:09.974 sys 0m0.549s 00:05:09.974 23:45:48 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.974 23:45:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 ************************************ 00:05:09.974 END TEST alias_rpc 00:05:09.974 ************************************ 00:05:09.974 23:45:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.974 23:45:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.974 23:45:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.974 23:45:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.974 23:45:48 -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 ************************************ 00:05:09.974 START TEST spdkcli_tcp 00:05:09.974 ************************************ 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.974 * Looking for test storage... 
00:05:09.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.974 23:45:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.974 --rc genhtml_branch_coverage=1 00:05:09.974 --rc genhtml_function_coverage=1 00:05:09.974 --rc genhtml_legend=1 00:05:09.974 --rc geninfo_all_blocks=1 00:05:09.974 --rc geninfo_unexecuted_blocks=1 00:05:09.974 00:05:09.974 ' 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.974 --rc genhtml_branch_coverage=1 00:05:09.974 --rc genhtml_function_coverage=1 00:05:09.974 --rc genhtml_legend=1 00:05:09.974 --rc geninfo_all_blocks=1 00:05:09.974 --rc geninfo_unexecuted_blocks=1 00:05:09.974 00:05:09.974 ' 00:05:09.974 23:45:48 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.974 --rc genhtml_branch_coverage=1 00:05:09.974 --rc genhtml_function_coverage=1 00:05:09.974 --rc genhtml_legend=1 00:05:09.974 --rc geninfo_all_blocks=1 00:05:09.974 --rc geninfo_unexecuted_blocks=1 00:05:09.974 00:05:09.974 ' 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.974 --rc genhtml_branch_coverage=1 00:05:09.974 --rc genhtml_function_coverage=1 00:05:09.974 --rc genhtml_legend=1 00:05:09.974 --rc geninfo_all_blocks=1 00:05:09.974 --rc geninfo_unexecuted_blocks=1 00:05:09.974 00:05:09.974 ' 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3800357 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3800357 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3800357 ']' 00:05:09.974 
23:45:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.974 23:45:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.974 23:45:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.974 [2024-12-13 23:45:48.981893] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:09.974 [2024-12-13 23:45:48.981982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3800357 ] 00:05:09.974 [2024-12-13 23:45:49.093926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.233 [2024-12-13 23:45:49.199906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.233 [2024-12-13 23:45:49.199915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.170 23:45:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.170 23:45:50 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:11.170 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3800583 00:05:11.170 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.170 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.170 [ 00:05:11.170 "bdev_malloc_delete", 00:05:11.170 "bdev_malloc_create", 00:05:11.170 "bdev_null_resize", 00:05:11.170 "bdev_null_delete", 00:05:11.170 "bdev_null_create", 00:05:11.170 "bdev_nvme_cuse_unregister", 00:05:11.170 "bdev_nvme_cuse_register", 00:05:11.170 "bdev_opal_new_user", 00:05:11.170 "bdev_opal_set_lock_state", 00:05:11.170 "bdev_opal_delete", 00:05:11.170 "bdev_opal_get_info", 00:05:11.170 "bdev_opal_create", 00:05:11.170 "bdev_nvme_opal_revert", 00:05:11.170 "bdev_nvme_opal_init", 00:05:11.170 "bdev_nvme_send_cmd", 00:05:11.170 "bdev_nvme_set_keys", 00:05:11.170 "bdev_nvme_get_path_iostat", 00:05:11.170 "bdev_nvme_get_mdns_discovery_info", 00:05:11.170 "bdev_nvme_stop_mdns_discovery", 00:05:11.170 "bdev_nvme_start_mdns_discovery", 00:05:11.170 "bdev_nvme_set_multipath_policy", 00:05:11.170 "bdev_nvme_set_preferred_path", 00:05:11.170 "bdev_nvme_get_io_paths", 00:05:11.170 "bdev_nvme_remove_error_injection", 00:05:11.170 "bdev_nvme_add_error_injection", 00:05:11.170 "bdev_nvme_get_discovery_info", 00:05:11.170 "bdev_nvme_stop_discovery", 00:05:11.170 "bdev_nvme_start_discovery", 00:05:11.170 "bdev_nvme_get_controller_health_info", 00:05:11.170 "bdev_nvme_disable_controller", 00:05:11.170 "bdev_nvme_enable_controller", 00:05:11.170 "bdev_nvme_reset_controller", 00:05:11.170 "bdev_nvme_get_transport_statistics", 00:05:11.170 "bdev_nvme_apply_firmware", 00:05:11.170 "bdev_nvme_detach_controller", 00:05:11.170 "bdev_nvme_get_controllers", 00:05:11.170 "bdev_nvme_attach_controller", 00:05:11.170 "bdev_nvme_set_hotplug", 00:05:11.170 "bdev_nvme_set_options", 00:05:11.170 "bdev_passthru_delete", 00:05:11.170 "bdev_passthru_create", 00:05:11.170 "bdev_lvol_set_parent_bdev", 00:05:11.170 "bdev_lvol_set_parent", 00:05:11.170 "bdev_lvol_check_shallow_copy", 00:05:11.170 "bdev_lvol_start_shallow_copy", 00:05:11.170 "bdev_lvol_grow_lvstore", 00:05:11.170 "bdev_lvol_get_lvols", 00:05:11.170 "bdev_lvol_get_lvstores", 
00:05:11.170 "bdev_lvol_delete", 00:05:11.170 "bdev_lvol_set_read_only", 00:05:11.170 "bdev_lvol_resize", 00:05:11.170 "bdev_lvol_decouple_parent", 00:05:11.170 "bdev_lvol_inflate", 00:05:11.170 "bdev_lvol_rename", 00:05:11.170 "bdev_lvol_clone_bdev", 00:05:11.170 "bdev_lvol_clone", 00:05:11.170 "bdev_lvol_snapshot", 00:05:11.170 "bdev_lvol_create", 00:05:11.170 "bdev_lvol_delete_lvstore", 00:05:11.170 "bdev_lvol_rename_lvstore", 00:05:11.170 "bdev_lvol_create_lvstore", 00:05:11.170 "bdev_raid_set_options", 00:05:11.170 "bdev_raid_remove_base_bdev", 00:05:11.170 "bdev_raid_add_base_bdev", 00:05:11.170 "bdev_raid_delete", 00:05:11.170 "bdev_raid_create", 00:05:11.170 "bdev_raid_get_bdevs", 00:05:11.170 "bdev_error_inject_error", 00:05:11.170 "bdev_error_delete", 00:05:11.170 "bdev_error_create", 00:05:11.170 "bdev_split_delete", 00:05:11.170 "bdev_split_create", 00:05:11.170 "bdev_delay_delete", 00:05:11.170 "bdev_delay_create", 00:05:11.170 "bdev_delay_update_latency", 00:05:11.170 "bdev_zone_block_delete", 00:05:11.170 "bdev_zone_block_create", 00:05:11.170 "blobfs_create", 00:05:11.170 "blobfs_detect", 00:05:11.170 "blobfs_set_cache_size", 00:05:11.170 "bdev_aio_delete", 00:05:11.170 "bdev_aio_rescan", 00:05:11.170 "bdev_aio_create", 00:05:11.170 "bdev_ftl_set_property", 00:05:11.170 "bdev_ftl_get_properties", 00:05:11.170 "bdev_ftl_get_stats", 00:05:11.170 "bdev_ftl_unmap", 00:05:11.170 "bdev_ftl_unload", 00:05:11.170 "bdev_ftl_delete", 00:05:11.170 "bdev_ftl_load", 00:05:11.170 "bdev_ftl_create", 00:05:11.170 "bdev_virtio_attach_controller", 00:05:11.170 "bdev_virtio_scsi_get_devices", 00:05:11.170 "bdev_virtio_detach_controller", 00:05:11.170 "bdev_virtio_blk_set_hotplug", 00:05:11.170 "bdev_iscsi_delete", 00:05:11.170 "bdev_iscsi_create", 00:05:11.170 "bdev_iscsi_set_options", 00:05:11.170 "accel_error_inject_error", 00:05:11.170 "ioat_scan_accel_module", 00:05:11.170 "dsa_scan_accel_module", 00:05:11.170 "iaa_scan_accel_module", 00:05:11.170 
"keyring_file_remove_key", 00:05:11.170 "keyring_file_add_key", 00:05:11.170 "keyring_linux_set_options", 00:05:11.170 "fsdev_aio_delete", 00:05:11.170 "fsdev_aio_create", 00:05:11.170 "iscsi_get_histogram", 00:05:11.170 "iscsi_enable_histogram", 00:05:11.171 "iscsi_set_options", 00:05:11.171 "iscsi_get_auth_groups", 00:05:11.171 "iscsi_auth_group_remove_secret", 00:05:11.171 "iscsi_auth_group_add_secret", 00:05:11.171 "iscsi_delete_auth_group", 00:05:11.171 "iscsi_create_auth_group", 00:05:11.171 "iscsi_set_discovery_auth", 00:05:11.171 "iscsi_get_options", 00:05:11.171 "iscsi_target_node_request_logout", 00:05:11.171 "iscsi_target_node_set_redirect", 00:05:11.171 "iscsi_target_node_set_auth", 00:05:11.171 "iscsi_target_node_add_lun", 00:05:11.171 "iscsi_get_stats", 00:05:11.171 "iscsi_get_connections", 00:05:11.171 "iscsi_portal_group_set_auth", 00:05:11.171 "iscsi_start_portal_group", 00:05:11.171 "iscsi_delete_portal_group", 00:05:11.171 "iscsi_create_portal_group", 00:05:11.171 "iscsi_get_portal_groups", 00:05:11.171 "iscsi_delete_target_node", 00:05:11.171 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.171 "iscsi_target_node_add_pg_ig_maps", 00:05:11.171 "iscsi_create_target_node", 00:05:11.171 "iscsi_get_target_nodes", 00:05:11.171 "iscsi_delete_initiator_group", 00:05:11.171 "iscsi_initiator_group_remove_initiators", 00:05:11.171 "iscsi_initiator_group_add_initiators", 00:05:11.171 "iscsi_create_initiator_group", 00:05:11.171 "iscsi_get_initiator_groups", 00:05:11.171 "nvmf_set_crdt", 00:05:11.171 "nvmf_set_config", 00:05:11.171 "nvmf_set_max_subsystems", 00:05:11.171 "nvmf_stop_mdns_prr", 00:05:11.171 "nvmf_publish_mdns_prr", 00:05:11.171 "nvmf_subsystem_get_listeners", 00:05:11.171 "nvmf_subsystem_get_qpairs", 00:05:11.171 "nvmf_subsystem_get_controllers", 00:05:11.171 "nvmf_get_stats", 00:05:11.171 "nvmf_get_transports", 00:05:11.171 "nvmf_create_transport", 00:05:11.171 "nvmf_get_targets", 00:05:11.171 "nvmf_delete_target", 00:05:11.171 
"nvmf_create_target", 00:05:11.171 "nvmf_subsystem_allow_any_host", 00:05:11.171 "nvmf_subsystem_set_keys", 00:05:11.171 "nvmf_subsystem_remove_host", 00:05:11.171 "nvmf_subsystem_add_host", 00:05:11.171 "nvmf_ns_remove_host", 00:05:11.171 "nvmf_ns_add_host", 00:05:11.171 "nvmf_subsystem_remove_ns", 00:05:11.171 "nvmf_subsystem_set_ns_ana_group", 00:05:11.171 "nvmf_subsystem_add_ns", 00:05:11.171 "nvmf_subsystem_listener_set_ana_state", 00:05:11.171 "nvmf_discovery_get_referrals", 00:05:11.171 "nvmf_discovery_remove_referral", 00:05:11.171 "nvmf_discovery_add_referral", 00:05:11.171 "nvmf_subsystem_remove_listener", 00:05:11.171 "nvmf_subsystem_add_listener", 00:05:11.171 "nvmf_delete_subsystem", 00:05:11.171 "nvmf_create_subsystem", 00:05:11.171 "nvmf_get_subsystems", 00:05:11.171 "env_dpdk_get_mem_stats", 00:05:11.171 "nbd_get_disks", 00:05:11.171 "nbd_stop_disk", 00:05:11.171 "nbd_start_disk", 00:05:11.171 "ublk_recover_disk", 00:05:11.171 "ublk_get_disks", 00:05:11.171 "ublk_stop_disk", 00:05:11.171 "ublk_start_disk", 00:05:11.171 "ublk_destroy_target", 00:05:11.171 "ublk_create_target", 00:05:11.171 "virtio_blk_create_transport", 00:05:11.171 "virtio_blk_get_transports", 00:05:11.171 "vhost_controller_set_coalescing", 00:05:11.171 "vhost_get_controllers", 00:05:11.171 "vhost_delete_controller", 00:05:11.171 "vhost_create_blk_controller", 00:05:11.171 "vhost_scsi_controller_remove_target", 00:05:11.171 "vhost_scsi_controller_add_target", 00:05:11.171 "vhost_start_scsi_controller", 00:05:11.171 "vhost_create_scsi_controller", 00:05:11.171 "thread_set_cpumask", 00:05:11.171 "scheduler_set_options", 00:05:11.171 "framework_get_governor", 00:05:11.171 "framework_get_scheduler", 00:05:11.171 "framework_set_scheduler", 00:05:11.171 "framework_get_reactors", 00:05:11.171 "thread_get_io_channels", 00:05:11.171 "thread_get_pollers", 00:05:11.171 "thread_get_stats", 00:05:11.171 "framework_monitor_context_switch", 00:05:11.171 "spdk_kill_instance", 00:05:11.171 
"log_enable_timestamps", 00:05:11.171 "log_get_flags", 00:05:11.171 "log_clear_flag", 00:05:11.171 "log_set_flag", 00:05:11.171 "log_get_level", 00:05:11.171 "log_set_level", 00:05:11.171 "log_get_print_level", 00:05:11.171 "log_set_print_level", 00:05:11.171 "framework_enable_cpumask_locks", 00:05:11.171 "framework_disable_cpumask_locks", 00:05:11.171 "framework_wait_init", 00:05:11.171 "framework_start_init", 00:05:11.171 "scsi_get_devices", 00:05:11.171 "bdev_get_histogram", 00:05:11.171 "bdev_enable_histogram", 00:05:11.171 "bdev_set_qos_limit", 00:05:11.171 "bdev_set_qd_sampling_period", 00:05:11.171 "bdev_get_bdevs", 00:05:11.171 "bdev_reset_iostat", 00:05:11.171 "bdev_get_iostat", 00:05:11.171 "bdev_examine", 00:05:11.171 "bdev_wait_for_examine", 00:05:11.171 "bdev_set_options", 00:05:11.171 "accel_get_stats", 00:05:11.171 "accel_set_options", 00:05:11.171 "accel_set_driver", 00:05:11.171 "accel_crypto_key_destroy", 00:05:11.171 "accel_crypto_keys_get", 00:05:11.171 "accel_crypto_key_create", 00:05:11.171 "accel_assign_opc", 00:05:11.171 "accel_get_module_info", 00:05:11.171 "accel_get_opc_assignments", 00:05:11.171 "vmd_rescan", 00:05:11.171 "vmd_remove_device", 00:05:11.171 "vmd_enable", 00:05:11.171 "sock_get_default_impl", 00:05:11.171 "sock_set_default_impl", 00:05:11.171 "sock_impl_set_options", 00:05:11.171 "sock_impl_get_options", 00:05:11.171 "iobuf_get_stats", 00:05:11.171 "iobuf_set_options", 00:05:11.171 "keyring_get_keys", 00:05:11.171 "framework_get_pci_devices", 00:05:11.171 "framework_get_config", 00:05:11.171 "framework_get_subsystems", 00:05:11.171 "fsdev_set_opts", 00:05:11.171 "fsdev_get_opts", 00:05:11.171 "trace_get_info", 00:05:11.171 "trace_get_tpoint_group_mask", 00:05:11.171 "trace_disable_tpoint_group", 00:05:11.171 "trace_enable_tpoint_group", 00:05:11.171 "trace_clear_tpoint_mask", 00:05:11.171 "trace_set_tpoint_mask", 00:05:11.171 "notify_get_notifications", 00:05:11.171 "notify_get_types", 00:05:11.171 "spdk_get_version", 
00:05:11.171 "rpc_get_methods" 00:05:11.171 ] 00:05:11.171 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.171 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.171 23:45:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3800357 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3800357 ']' 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3800357 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3800357 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3800357' 00:05:11.171 killing process with pid 3800357 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3800357 00:05:11.171 23:45:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3800357 00:05:13.705 00:05:13.706 real 0m3.946s 00:05:13.706 user 0m7.216s 00:05:13.706 sys 0m0.589s 00:05:13.706 23:45:52 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.706 23:45:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.706 ************************************ 00:05:13.706 END TEST spdkcli_tcp 00:05:13.706 ************************************ 00:05:13.706 23:45:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:13.706 23:45:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.706 23:45:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.706 23:45:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.706 ************************************ 00:05:13.706 START TEST dpdk_mem_utility 00:05:13.706 ************************************ 00:05:13.706 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:13.706 * Looking for test storage... 00:05:13.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:13.706 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.706 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.706 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.965 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.965 
23:45:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.965 23:45:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:13.965 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.965 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.965 --rc genhtml_branch_coverage=1 00:05:13.965 --rc genhtml_function_coverage=1 00:05:13.965 --rc genhtml_legend=1 00:05:13.965 --rc geninfo_all_blocks=1 00:05:13.965 --rc 
geninfo_unexecuted_blocks=1 00:05:13.965 00:05:13.965 ' 00:05:13.965 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.965 --rc genhtml_branch_coverage=1 00:05:13.965 --rc genhtml_function_coverage=1 00:05:13.965 --rc genhtml_legend=1 00:05:13.965 --rc geninfo_all_blocks=1 00:05:13.965 --rc geninfo_unexecuted_blocks=1 00:05:13.965 00:05:13.965 ' 00:05:13.965 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.965 --rc genhtml_branch_coverage=1 00:05:13.965 --rc genhtml_function_coverage=1 00:05:13.965 --rc genhtml_legend=1 00:05:13.965 --rc geninfo_all_blocks=1 00:05:13.966 --rc geninfo_unexecuted_blocks=1 00:05:13.966 00:05:13.966 ' 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.966 --rc genhtml_branch_coverage=1 00:05:13.966 --rc genhtml_function_coverage=1 00:05:13.966 --rc genhtml_legend=1 00:05:13.966 --rc geninfo_all_blocks=1 00:05:13.966 --rc geninfo_unexecuted_blocks=1 00:05:13.966 00:05:13.966 ' 00:05:13.966 23:45:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.966 23:45:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3801103 00:05:13.966 23:45:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.966 23:45:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3801103 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3801103 ']' 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.966 23:45:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.966 [2024-12-13 23:45:52.996110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:13.966 [2024-12-13 23:45:52.996200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801103 ] 00:05:14.225 [2024-12-13 23:45:53.107164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.225 [2024-12-13 23:45:53.213390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.165 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.165 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:15.165 23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.165 23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.165 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.165 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.165 { 00:05:15.165 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.165 } 00:05:15.165 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.165 
23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.165 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:15.165 1 heaps totaling size 824.000000 MiB 00:05:15.165 size: 824.000000 MiB heap id: 0 00:05:15.165 end heaps---------- 00:05:15.165 9 mempools totaling size 603.782043 MiB 00:05:15.165 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.165 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.165 size: 100.555481 MiB name: bdev_io_3801103 00:05:15.165 size: 50.003479 MiB name: msgpool_3801103 00:05:15.165 size: 36.509338 MiB name: fsdev_io_3801103 00:05:15.165 size: 21.763794 MiB name: PDU_Pool 00:05:15.165 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.165 size: 4.133484 MiB name: evtpool_3801103 00:05:15.165 size: 0.026123 MiB name: Session_Pool 00:05:15.165 end mempools------- 00:05:15.165 6 memzones totaling size 4.142822 MiB 00:05:15.165 size: 1.000366 MiB name: RG_ring_0_3801103 00:05:15.165 size: 1.000366 MiB name: RG_ring_1_3801103 00:05:15.165 size: 1.000366 MiB name: RG_ring_4_3801103 00:05:15.165 size: 1.000366 MiB name: RG_ring_5_3801103 00:05:15.165 size: 0.125366 MiB name: RG_ring_2_3801103 00:05:15.165 size: 0.015991 MiB name: RG_ring_3_3801103 00:05:15.165 end memzones------- 00:05:15.165 23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.165 heap id: 0 total size: 824.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:15.165 list of free elements. 
size: 16.847595 MiB 00:05:15.165 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:15.165 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:15.165 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:15.165 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:15.165 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:15.165 element at address: 0x200019a00000 with size: 0.999329 MiB 00:05:15.165 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:15.165 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:15.165 element at address: 0x200019200000 with size: 0.959900 MiB 00:05:15.165 element at address: 0x200019d00040 with size: 0.937256 MiB 00:05:15.165 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:15.165 element at address: 0x20001b400000 with size: 0.583191 MiB 00:05:15.165 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:15.165 element at address: 0x200019600000 with size: 0.491150 MiB 00:05:15.165 element at address: 0x200019e00000 with size: 0.485657 MiB 00:05:15.165 element at address: 0x200012c00000 with size: 0.436157 MiB 00:05:15.165 element at address: 0x200028800000 with size: 0.411072 MiB 00:05:15.165 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:15.165 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:15.165 list of standard malloc elements. 
size: 199.221497 MiB 00:05:15.165 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:15.165 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:15.165 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:15.165 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:15.165 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:15.165 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:15.165 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:15.165 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:15.165 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:15.165 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:15.165 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:15.165 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:15.165 element at 
address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ff980 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:15.165 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:15.165 list of memzone associated elements. 
size: 607.930908 MiB 00:05:15.165 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:15.165 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.165 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:15.165 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.165 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:15.165 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3801103_0 00:05:15.165 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:15.165 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3801103_0 00:05:15.165 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:15.165 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3801103_0 00:05:15.165 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:15.165 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.165 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:15.165 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.165 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:15.165 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3801103_0 00:05:15.166 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:15.166 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3801103 00:05:15.166 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:15.166 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3801103 00:05:15.166 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:15.166 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.166 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:15.166 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.166 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:15.166 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.166 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:15.166 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.166 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:15.166 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3801103 00:05:15.166 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:15.166 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3801103 00:05:15.166 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:15.166 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3801103 00:05:15.166 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:15.166 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3801103 00:05:15.166 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:15.166 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3801103 00:05:15.166 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:15.166 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3801103 00:05:15.166 element at address: 0x20001967dbc0 with size: 0.500549 MiB 00:05:15.166 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.166 element at address: 0x200012c6fa80 with size: 0.500549 MiB 00:05:15.166 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.166 element at address: 0x200019e7c540 with size: 0.250549 MiB 00:05:15.166 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.166 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:15.166 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3801103 00:05:15.166 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:15.166 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3801103 00:05:15.166 element at address: 0x2000192f5bc0 with size: 0.031799 
MiB 00:05:15.166 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.166 element at address: 0x2000288693c0 with size: 0.023804 MiB 00:05:15.166 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.166 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:15.166 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3801103 00:05:15.166 element at address: 0x20002886f540 with size: 0.002502 MiB 00:05:15.166 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.166 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:15.166 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3801103 00:05:15.166 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:15.166 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3801103 00:05:15.166 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:15.166 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3801103 00:05:15.166 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:15.166 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.166 23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.166 23:45:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3801103 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3801103 ']' 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3801103 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801103 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.166 23:45:54 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3801103' 00:05:15.166 killing process with pid 3801103 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3801103 00:05:15.166 23:45:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3801103 00:05:17.696 00:05:17.696 real 0m3.745s 00:05:17.696 user 0m3.694s 00:05:17.696 sys 0m0.547s 00:05:17.696 23:45:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.696 23:45:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.696 ************************************ 00:05:17.696 END TEST dpdk_mem_utility 00:05:17.696 ************************************ 00:05:17.696 23:45:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.696 23:45:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.696 23:45:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.696 23:45:56 -- common/autotest_common.sh@10 -- # set +x 00:05:17.696 ************************************ 00:05:17.696 START TEST event 00:05:17.696 ************************************ 00:05:17.696 23:45:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:17.696 * Looking for test storage... 
00:05:17.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:17.696 23:45:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.696 23:45:56 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.696 23:45:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.696 23:45:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.696 23:45:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.696 23:45:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.696 23:45:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.696 23:45:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.696 23:45:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.696 23:45:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.696 23:45:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.696 23:45:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.696 23:45:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.696 23:45:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.696 23:45:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.696 23:45:56 event -- scripts/common.sh@344 -- # case "$op" in 00:05:17.696 23:45:56 event -- scripts/common.sh@345 -- # : 1 00:05:17.696 23:45:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.697 23:45:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.697 23:45:56 event -- scripts/common.sh@365 -- # decimal 1 00:05:17.697 23:45:56 event -- scripts/common.sh@353 -- # local d=1 00:05:17.697 23:45:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.697 23:45:56 event -- scripts/common.sh@355 -- # echo 1 00:05:17.697 23:45:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.697 23:45:56 event -- scripts/common.sh@366 -- # decimal 2 00:05:17.697 23:45:56 event -- scripts/common.sh@353 -- # local d=2 00:05:17.697 23:45:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.697 23:45:56 event -- scripts/common.sh@355 -- # echo 2 00:05:17.697 23:45:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.697 23:45:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.697 23:45:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.697 23:45:56 event -- scripts/common.sh@368 -- # return 0 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.697 --rc genhtml_branch_coverage=1 00:05:17.697 --rc genhtml_function_coverage=1 00:05:17.697 --rc genhtml_legend=1 00:05:17.697 --rc geninfo_all_blocks=1 00:05:17.697 --rc geninfo_unexecuted_blocks=1 00:05:17.697 00:05:17.697 ' 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.697 --rc genhtml_branch_coverage=1 00:05:17.697 --rc genhtml_function_coverage=1 00:05:17.697 --rc genhtml_legend=1 00:05:17.697 --rc geninfo_all_blocks=1 00:05:17.697 --rc geninfo_unexecuted_blocks=1 00:05:17.697 00:05:17.697 ' 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.697 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:17.697 --rc genhtml_branch_coverage=1 00:05:17.697 --rc genhtml_function_coverage=1 00:05:17.697 --rc genhtml_legend=1 00:05:17.697 --rc geninfo_all_blocks=1 00:05:17.697 --rc geninfo_unexecuted_blocks=1 00:05:17.697 00:05:17.697 ' 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.697 --rc genhtml_branch_coverage=1 00:05:17.697 --rc genhtml_function_coverage=1 00:05:17.697 --rc genhtml_legend=1 00:05:17.697 --rc geninfo_all_blocks=1 00:05:17.697 --rc geninfo_unexecuted_blocks=1 00:05:17.697 00:05:17.697 ' 00:05:17.697 23:45:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:17.697 23:45:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.697 23:45:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:17.697 23:45:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.697 23:45:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.697 ************************************ 00:05:17.697 START TEST event_perf 00:05:17.697 ************************************ 00:05:17.697 23:45:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.697 Running I/O for 1 seconds...[2024-12-13 23:45:56.787807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:17.697 [2024-12-13 23:45:56.787880] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3801844 ] 00:05:17.956 [2024-12-13 23:45:56.898690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.956 [2024-12-13 23:45:57.007585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.956 [2024-12-13 23:45:57.007659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.956 [2024-12-13 23:45:57.007734] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.956 [2024-12-13 23:45:57.007741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.331 Running I/O for 1 seconds... 00:05:19.331 lcore 0: 208046 00:05:19.331 lcore 1: 208045 00:05:19.331 lcore 2: 208046 00:05:19.331 lcore 3: 208047 00:05:19.331 done. 
00:05:19.331 00:05:19.331 real 0m1.483s 00:05:19.331 user 0m4.353s 00:05:19.331 sys 0m0.124s 00:05:19.331 23:45:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.331 23:45:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.331 ************************************ 00:05:19.331 END TEST event_perf 00:05:19.331 ************************************ 00:05:19.331 23:45:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.331 23:45:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:19.331 23:45:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.331 23:45:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.331 ************************************ 00:05:19.331 START TEST event_reactor 00:05:19.331 ************************************ 00:05:19.331 23:45:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.331 [2024-12-13 23:45:58.341530] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:19.331 [2024-12-13 23:45:58.341604] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802111 ] 00:05:19.331 [2024-12-13 23:45:58.449345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.589 [2024-12-13 23:45:58.553653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.964 test_start 00:05:20.964 oneshot 00:05:20.964 tick 100 00:05:20.964 tick 100 00:05:20.964 tick 250 00:05:20.964 tick 100 00:05:20.964 tick 100 00:05:20.964 tick 100 00:05:20.964 tick 250 00:05:20.964 tick 500 00:05:20.964 tick 100 00:05:20.964 tick 100 00:05:20.964 tick 250 00:05:20.964 tick 100 00:05:20.964 tick 100 00:05:20.964 test_end 00:05:20.964 00:05:20.964 real 0m1.465s 00:05:20.964 user 0m1.340s 00:05:20.964 sys 0m0.118s 00:05:20.964 23:45:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.964 23:45:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.964 ************************************ 00:05:20.964 END TEST event_reactor 00:05:20.964 ************************************ 00:05:20.964 23:45:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.964 23:45:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.964 23:45:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.964 23:45:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.964 ************************************ 00:05:20.964 START TEST event_reactor_perf 00:05:20.964 ************************************ 00:05:20.964 23:45:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:20.964 [2024-12-13 23:45:59.870730] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:20.964 [2024-12-13 23:45:59.870810] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802355 ] 00:05:20.964 [2024-12-13 23:45:59.980026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.964 [2024-12-13 23:46:00.098369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.338 test_start 00:05:22.338 test_end 00:05:22.338 Performance: 386958 events per second 00:05:22.338 00:05:22.338 real 0m1.477s 00:05:22.338 user 0m1.357s 00:05:22.338 sys 0m0.113s 00:05:22.338 23:46:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.338 23:46:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.338 ************************************ 00:05:22.338 END TEST event_reactor_perf 00:05:22.338 ************************************ 00:05:22.338 23:46:01 event -- event/event.sh@49 -- # uname -s 00:05:22.338 23:46:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.338 23:46:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.338 23:46:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.338 23:46:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.338 23:46:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.338 ************************************ 00:05:22.338 START TEST event_scheduler 00:05:22.338 ************************************ 00:05:22.338 23:46:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.338 * Looking for test storage... 00:05:22.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:22.338 23:46:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.338 23:46:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.338 23:46:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.597 23:46:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.597 23:46:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:22.597 23:46:01 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.597 23:46:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.597 --rc genhtml_branch_coverage=1 00:05:22.597 --rc genhtml_function_coverage=1 00:05:22.597 --rc genhtml_legend=1 00:05:22.597 --rc geninfo_all_blocks=1 00:05:22.597 --rc geninfo_unexecuted_blocks=1 00:05:22.597 00:05:22.597 ' 00:05:22.597 23:46:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.597 --rc genhtml_branch_coverage=1 00:05:22.597 --rc genhtml_function_coverage=1 00:05:22.597 --rc 
genhtml_legend=1 00:05:22.597 --rc geninfo_all_blocks=1 00:05:22.597 --rc geninfo_unexecuted_blocks=1 00:05:22.597 00:05:22.597 ' 00:05:22.597 23:46:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.597 --rc genhtml_branch_coverage=1 00:05:22.597 --rc genhtml_function_coverage=1 00:05:22.597 --rc genhtml_legend=1 00:05:22.597 --rc geninfo_all_blocks=1 00:05:22.597 --rc geninfo_unexecuted_blocks=1 00:05:22.597 00:05:22.597 ' 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.598 --rc genhtml_branch_coverage=1 00:05:22.598 --rc genhtml_function_coverage=1 00:05:22.598 --rc genhtml_legend=1 00:05:22.598 --rc geninfo_all_blocks=1 00:05:22.598 --rc geninfo_unexecuted_blocks=1 00:05:22.598 00:05:22.598 ' 00:05:22.598 23:46:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.598 23:46:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.598 23:46:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3802755 00:05:22.598 23:46:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.598 23:46:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3802755 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3802755 ']' 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.598 23:46:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.598 [2024-12-13 23:46:01.615836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:22.598 [2024-12-13 23:46:01.615931] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802755 ] 00:05:22.598 [2024-12-13 23:46:01.726138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.856 [2024-12-13 23:46:01.838017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.856 [2024-12-13 23:46:01.838103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.856 [2024-12-13 23:46:01.838161] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.856 [2024-12-13 23:46:01.838172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.422 23:46:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.422 23:46:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:23.422 23:46:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.422 23:46:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.422 23:46:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.422 [2024-12-13 23:46:02.440595] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:23.422 [2024-12-13 23:46:02.440621] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.422 [2024-12-13 23:46:02.440639] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.423 [2024-12-13 23:46:02.440648] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.423 [2024-12-13 23:46:02.440659] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.423 23:46:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.423 23:46:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.423 23:46:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.423 23:46:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 [2024-12-13 23:46:02.762558] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:23.680 23:46:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.680 23:46:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.680 23:46:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.680 23:46:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.680 23:46:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 ************************************ 00:05:23.680 START TEST scheduler_create_thread 00:05:23.680 ************************************ 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.680 2 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.680 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 3 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 4 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 5 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 6 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 7 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 8 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 9 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 10 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.939 23:46:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.939 23:46:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.314 23:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.314 23:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:25.314 23:46:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:25.314 23:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.314 23:46:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.686 23:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.686 00:05:26.686 real 0m2.626s 00:05:26.686 user 0m0.024s 00:05:26.686 sys 0m0.005s 00:05:26.686 23:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.686 23:46:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.686 ************************************ 00:05:26.686 END TEST scheduler_create_thread 00:05:26.686 ************************************ 00:05:26.686 23:46:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.686 23:46:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3802755 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3802755 ']' 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3802755 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3802755 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3802755' 00:05:26.686 killing process with pid 3802755 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3802755 00:05:26.686 23:46:05 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3802755 00:05:26.945 [2024-12-13 23:46:05.906678] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:28.319 00:05:28.319 real 0m5.668s 00:05:28.319 user 0m10.031s 00:05:28.319 sys 0m0.475s 00:05:28.319 23:46:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.319 23:46:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.319 ************************************ 00:05:28.319 END TEST event_scheduler 00:05:28.319 ************************************ 00:05:28.319 23:46:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.319 23:46:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.319 23:46:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.319 23:46:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.319 23:46:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.319 ************************************ 00:05:28.319 START TEST app_repeat 00:05:28.319 ************************************ 00:05:28.319 23:46:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3803799 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3803799' 00:05:28.319 Process app_repeat pid: 3803799 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.319 spdk_app_start Round 0 00:05:28.319 23:46:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3803799 /var/tmp/spdk-nbd.sock 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3803799 ']' 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.320 23:46:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.320 [2024-12-13 23:46:07.185109] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:28.320 [2024-12-13 23:46:07.185194] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803799 ] 00:05:28.320 [2024-12-13 23:46:07.297629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.320 [2024-12-13 23:46:07.404651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.320 [2024-12-13 23:46:07.404662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.886 23:46:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.886 23:46:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.886 23:46:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.144 Malloc0 00:05:29.144 23:46:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.402 Malloc1 00:05:29.402 23:46:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.402 
23:46:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.402 23:46:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.661 /dev/nbd0 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:29.661 1+0 records in 00:05:29.661 1+0 records out 00:05:29.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184785 s, 22.2 MB/s 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.661 23:46:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.661 23:46:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.919 /dev/nbd1 00:05:29.919 23:46:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.919 23:46:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.919 23:46:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.919 23:46:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.919 23:46:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.920 23:46:09 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.920 1+0 records in 00:05:29.920 1+0 records out 00:05:29.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197817 s, 20.7 MB/s 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.920 23:46:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.920 23:46:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.920 23:46:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.920 23:46:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.920 23:46:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.920 23:46:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.178 { 00:05:30.178 "nbd_device": "/dev/nbd0", 00:05:30.178 "bdev_name": "Malloc0" 00:05:30.178 }, 00:05:30.178 { 00:05:30.178 "nbd_device": "/dev/nbd1", 00:05:30.178 "bdev_name": "Malloc1" 00:05:30.178 } 00:05:30.178 ]' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.178 { 00:05:30.178 "nbd_device": "/dev/nbd0", 00:05:30.178 "bdev_name": "Malloc0" 00:05:30.178 
}, 00:05:30.178 { 00:05:30.178 "nbd_device": "/dev/nbd1", 00:05:30.178 "bdev_name": "Malloc1" 00:05:30.178 } 00:05:30.178 ]' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.178 /dev/nbd1' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.178 /dev/nbd1' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.178 23:46:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.179 256+0 records in 00:05:30.179 256+0 records out 00:05:30.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104751 s, 100 MB/s 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.179 256+0 records in 00:05:30.179 256+0 records out 00:05:30.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156363 s, 67.1 MB/s 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.179 256+0 records in 00:05:30.179 256+0 records out 00:05:30.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180349 s, 58.1 MB/s 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.179 23:46:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.437 23:46:09 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.437 23:46:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.438 23:46:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.438 23:46:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.438 23:46:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.696 23:46:09 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.696 23:46:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.954 23:46:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.955 23:46:09 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.213 23:46:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.640 [2024-12-13 23:46:11.565271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.640 [2024-12-13 23:46:11.665216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.640 [2024-12-13 23:46:11.665217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.899 [2024-12-13 23:46:11.856280] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.899 [2024-12-13 23:46:11.856321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.275 23:46:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:34.275 23:46:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:34.275 spdk_app_start Round 1 00:05:34.275 23:46:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3803799 /var/tmp/spdk-nbd.sock 00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3803799 ']' 00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:34.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.275 23:46:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.533 23:46:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.533 23:46:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:34.533 23:46:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.791 Malloc0 00:05:34.791 23:46:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.050 Malloc1 00:05:35.050 23:46:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.050 23:46:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.309 /dev/nbd0 00:05:35.309 23:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.309 23:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.309 1+0 records in 00:05:35.309 1+0 records out 00:05:35.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191731 s, 21.4 MB/s 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.309 23:46:14 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.309 23:46:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.309 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.309 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.309 23:46:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.567 /dev/nbd1 00:05:35.567 23:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.567 23:46:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.567 23:46:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:35.567 23:46:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:35.567 23:46:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.568 1+0 records in 00:05:35.568 1+0 records out 00:05:35.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221193 s, 18.5 MB/s 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:35.568 23:46:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:35.568 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.568 23:46:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.568 23:46:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.568 23:46:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.568 23:46:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.827 { 00:05:35.827 "nbd_device": "/dev/nbd0", 00:05:35.827 "bdev_name": "Malloc0" 00:05:35.827 }, 00:05:35.827 { 00:05:35.827 "nbd_device": "/dev/nbd1", 00:05:35.827 "bdev_name": "Malloc1" 00:05:35.827 } 00:05:35.827 ]' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.827 { 00:05:35.827 "nbd_device": "/dev/nbd0", 00:05:35.827 "bdev_name": "Malloc0" 00:05:35.827 }, 00:05:35.827 { 00:05:35.827 "nbd_device": "/dev/nbd1", 00:05:35.827 "bdev_name": "Malloc1" 00:05:35.827 } 00:05:35.827 ]' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.827 /dev/nbd1' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.827 /dev/nbd1' 00:05:35.827 
23:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.827 256+0 records in 00:05:35.827 256+0 records out 00:05:35.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106195 s, 98.7 MB/s 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.827 256+0 records in 00:05:35.827 256+0 records out 00:05:35.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162381 s, 64.6 MB/s 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.827 256+0 records in 00:05:35.827 256+0 records out 00:05:35.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185783 s, 56.4 MB/s 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.827 23:46:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.091 23:46:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.350 23:46:15 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.350 23:46:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.609 23:46:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.609 23:46:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.867 23:46:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.243 [2024-12-13 23:46:17.109782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.243 [2024-12-13 23:46:17.209335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.243 [2024-12-13 23:46:17.209343] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.502 [2024-12-13 23:46:17.403632] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.502 [2024-12-13 23:46:17.403683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.878 23:46:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.878 23:46:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:39.878 spdk_app_start Round 2 00:05:39.878 23:46:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3803799 /var/tmp/spdk-nbd.sock 00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3803799 ']' 00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.878 23:46:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.137 23:46:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.137 23:46:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.137 23:46:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.396 Malloc0 00:05:40.396 23:46:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.654 Malloc1 00:05:40.654 23:46:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.654 23:46:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.654 /dev/nbd0 00:05:40.913 23:46:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.913 23:46:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.913 1+0 records in 00:05:40.913 1+0 records out 00:05:40.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204507 s, 20.0 MB/s 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.913 23:46:19 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.913 23:46:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.913 23:46:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.913 23:46:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.913 23:46:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.913 /dev/nbd1 00:05:40.913 23:46:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.913 23:46:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.913 23:46:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.172 1+0 records in 00:05:41.172 1+0 records out 00:05:41.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255572 s, 16.0 MB/s 00:05:41.172 23:46:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.172 23:46:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.172 23:46:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.172 23:46:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.172 23:46:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.172 { 00:05:41.172 "nbd_device": "/dev/nbd0", 00:05:41.172 "bdev_name": "Malloc0" 00:05:41.172 }, 00:05:41.172 { 00:05:41.172 "nbd_device": "/dev/nbd1", 00:05:41.172 "bdev_name": "Malloc1" 00:05:41.172 } 00:05:41.172 ]' 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.172 { 00:05:41.172 "nbd_device": "/dev/nbd0", 00:05:41.172 "bdev_name": "Malloc0" 00:05:41.172 }, 00:05:41.172 { 00:05:41.172 "nbd_device": "/dev/nbd1", 00:05:41.172 "bdev_name": "Malloc1" 00:05:41.172 } 00:05:41.172 ]' 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.172 /dev/nbd1' 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.172 /dev/nbd1' 00:05:41.172 
23:46:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.172 23:46:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.173 23:46:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.173 23:46:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.173 23:46:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.431 256+0 records in 00:05:41.431 256+0 records out 00:05:41.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106535 s, 98.4 MB/s 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.431 256+0 records in 00:05:41.431 256+0 records out 00:05:41.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163201 s, 64.3 MB/s 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.431 256+0 records in 00:05:41.431 256+0 records out 00:05:41.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195014 s, 53.8 MB/s 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.431 23:46:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.690 23:46:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.690 23:46:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.949 23:46:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.949 23:46:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.949 23:46:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.949 23:46:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.949 23:46:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.516 23:46:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.893 [2024-12-13 23:46:22.611762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.894 [2024-12-13 23:46:22.712858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.894 [2024-12-13 23:46:22.712859] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.894 [2024-12-13 23:46:22.904591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.894 [2024-12-13 23:46:22.904643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.797 23:46:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3803799 /var/tmp/spdk-nbd.sock 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3803799 ']' 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.797 23:46:24 event.app_repeat -- event/event.sh@39 -- # killprocess 3803799 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3803799 ']' 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3803799 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803799 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803799' 00:05:45.797 killing process with pid 3803799 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3803799 00:05:45.797 23:46:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3803799 00:05:46.733 spdk_app_start is called in Round 0. 00:05:46.733 Shutdown signal received, stop current app iteration 00:05:46.733 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:46.733 spdk_app_start is called in Round 1. 00:05:46.733 Shutdown signal received, stop current app iteration 00:05:46.733 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:46.733 spdk_app_start is called in Round 2. 
00:05:46.733 Shutdown signal received, stop current app iteration 00:05:46.733 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:46.733 spdk_app_start is called in Round 3. 00:05:46.733 Shutdown signal received, stop current app iteration 00:05:46.733 23:46:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.733 23:46:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:46.733 00:05:46.733 real 0m18.573s 00:05:46.733 user 0m39.353s 00:05:46.733 sys 0m2.536s 00:05:46.733 23:46:25 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.733 23:46:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.733 ************************************ 00:05:46.733 END TEST app_repeat 00:05:46.733 ************************************ 00:05:46.733 23:46:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.733 23:46:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.733 23:46:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.733 23:46:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.733 23:46:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.733 ************************************ 00:05:46.733 START TEST cpu_locks 00:05:46.733 ************************************ 00:05:46.733 23:46:25 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.733 * Looking for test storage... 
00:05:46.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.733 23:46:25 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.733 23:46:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.733 23:46:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.992 23:46:25 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.992 23:46:25 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.993 23:46:25 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.993 23:46:25 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.993 --rc genhtml_branch_coverage=1 00:05:46.993 --rc genhtml_function_coverage=1 00:05:46.993 --rc genhtml_legend=1 00:05:46.993 --rc geninfo_all_blocks=1 00:05:46.993 --rc geninfo_unexecuted_blocks=1 00:05:46.993 00:05:46.993 ' 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.993 --rc genhtml_branch_coverage=1 00:05:46.993 --rc genhtml_function_coverage=1 00:05:46.993 --rc genhtml_legend=1 00:05:46.993 --rc geninfo_all_blocks=1 00:05:46.993 --rc geninfo_unexecuted_blocks=1 
00:05:46.993 00:05:46.993 ' 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.993 --rc genhtml_branch_coverage=1 00:05:46.993 --rc genhtml_function_coverage=1 00:05:46.993 --rc genhtml_legend=1 00:05:46.993 --rc geninfo_all_blocks=1 00:05:46.993 --rc geninfo_unexecuted_blocks=1 00:05:46.993 00:05:46.993 ' 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.993 --rc genhtml_branch_coverage=1 00:05:46.993 --rc genhtml_function_coverage=1 00:05:46.993 --rc genhtml_legend=1 00:05:46.993 --rc geninfo_all_blocks=1 00:05:46.993 --rc geninfo_unexecuted_blocks=1 00:05:46.993 00:05:46.993 ' 00:05:46.993 23:46:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.993 23:46:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.993 23:46:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.993 23:46:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.993 23:46:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.993 ************************************ 00:05:46.993 START TEST default_locks 00:05:46.993 ************************************ 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3807167 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3807167 00:05:46.993 23:46:25 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3807167 ']' 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.993 23:46:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.993 [2024-12-13 23:46:26.055307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:46.993 [2024-12-13 23:46:26.055395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807167 ] 00:05:47.252 [2024-12-13 23:46:26.166190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.253 [2024-12-13 23:46:26.270837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.190 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.190 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:48.190 23:46:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3807167 00:05:48.190 23:46:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3807167 00:05:48.190 23:46:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.449 lslocks: write error 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3807167 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3807167 ']' 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3807167 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807167 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3807167' 00:05:48.449 killing process with pid 3807167 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3807167 00:05:48.449 23:46:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3807167 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3807167 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3807167 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3807167 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3807167 ']' 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3807167) - No such process 00:05:50.983 ERROR: process (pid: 3807167) is no longer running 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.983 00:05:50.983 real 0m3.879s 00:05:50.983 user 0m3.873s 00:05:50.983 sys 0m0.628s 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.983 23:46:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.983 ************************************ 00:05:50.983 END TEST default_locks 00:05:50.983 ************************************ 00:05:50.983 23:46:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:50.983 23:46:29 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.983 23:46:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.983 23:46:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.983 ************************************ 00:05:50.983 START TEST default_locks_via_rpc 00:05:50.983 ************************************ 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3807867 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3807867 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3807867 ']' 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.983 23:46:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.983 [2024-12-13 23:46:29.984609] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:50.983 [2024-12-13 23:46:29.984700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807867 ] 00:05:50.983 [2024-12-13 23:46:30.109757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.242 [2024-12-13 23:46:30.217057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.178 23:46:31 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3807867 ']' 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3807867' 00:05:52.178 killing process with pid 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3807867 00:05:52.178 23:46:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3807867 00:05:54.711 00:05:54.711 real 0m3.647s 00:05:54.711 user 0m3.625s 00:05:54.711 sys 0m0.553s 00:05:54.711 23:46:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.711 23:46:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 ************************************ 00:05:54.711 END TEST default_locks_via_rpc 00:05:54.711 ************************************ 00:05:54.711 23:46:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.711 23:46:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.711 23:46:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.711 23:46:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 ************************************ 00:05:54.711 START TEST non_locking_app_on_locked_coremask 00:05:54.711 ************************************ 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3808362 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3808362 /var/tmp/spdk.sock 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3808362 ']' 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:54.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.711 23:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.711 [2024-12-13 23:46:33.703850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:54.711 [2024-12-13 23:46:33.703943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808362 ] 00:05:54.711 [2024-12-13 23:46:33.811093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.970 [2024-12-13 23:46:33.927236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3808588 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3808588 /var/tmp/spdk2.sock 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3808588 ']' 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.906 23:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.906 [2024-12-13 23:46:34.824071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:55.906 [2024-12-13 23:46:34.824199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808588 ] 00:05:55.906 [2024-12-13 23:46:34.976197] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.906 [2024-12-13 23:46:34.976245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.164 [2024-12-13 23:46:35.191301] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.697 lslocks: write error 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3808362 ']' 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3808362' 00:05:58.697 killing process with pid 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3808362 00:05:58.697 23:46:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3808362 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3808588 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3808588 ']' 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3808588 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3808588 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3808588' 00:06:03.981 killing process with pid 3808588 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3808588 00:06:03.981 23:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3808588 00:06:05.886 00:06:05.886 real 0m11.042s 00:06:05.886 user 0m11.320s 00:06:05.886 sys 0m1.116s 00:06:05.886 23:46:44 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.886 23:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.886 ************************************ 00:06:05.886 END TEST non_locking_app_on_locked_coremask 00:06:05.886 ************************************ 00:06:05.886 23:46:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:05.886 23:46:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.886 23:46:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.886 23:46:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.886 ************************************ 00:06:05.886 START TEST locking_app_on_unlocked_coremask 00:06:05.886 ************************************ 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3810366 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3810366 /var/tmp/spdk.sock 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3810366 ']' 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.886 23:46:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.886 23:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.886 [2024-12-13 23:46:44.826741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:05.886 [2024-12-13 23:46:44.826833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810366 ] 00:06:05.886 [2024-12-13 23:46:44.938485] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:05.886 [2024-12-13 23:46:44.938523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.149 [2024-12-13 23:46:45.040944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3810440 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3810440 /var/tmp/spdk2.sock 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3810440 ']' 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.720 23:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.979 [2024-12-13 23:46:45.940891] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:06.979 [2024-12-13 23:46:45.940981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3810440 ] 00:06:06.979 [2024-12-13 23:46:46.097982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.238 [2024-12-13 23:46:46.306830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3810440 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3810440 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.772 lslocks: write error 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3810366 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3810366 ']' 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3810366 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3810366 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3810366' 00:06:09.772 killing process with pid 3810366 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3810366 00:06:09.772 23:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3810366 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3810440 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3810440 ']' 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3810440 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3810440 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3810440' 00:06:15.044 killing process with pid 3810440 00:06:15.044 23:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3810440 00:06:15.044 23:46:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3810440 00:06:16.948 00:06:16.948 real 0m11.039s 00:06:16.948 user 0m11.319s 00:06:16.948 sys 0m1.099s 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.948 ************************************ 00:06:16.948 END TEST locking_app_on_unlocked_coremask 00:06:16.948 ************************************ 00:06:16.948 23:46:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.948 23:46:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.948 23:46:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.948 23:46:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.948 ************************************ 00:06:16.948 START TEST locking_app_on_locked_coremask 00:06:16.948 ************************************ 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3812238 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3812238 /var/tmp/spdk.sock 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3812238 ']' 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.948 23:46:55 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.948 23:46:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.948 [2024-12-13 23:46:55.933922] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:16.948 [2024-12-13 23:46:55.934008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812238 ] 00:06:16.948 [2024-12-13 23:46:56.046757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.207 [2024-12-13 23:46:56.148626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3812462 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3812462 /var/tmp/spdk2.sock 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:18.144 23:46:56 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3812462 /var/tmp/spdk2.sock 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3812462 /var/tmp/spdk2.sock 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3812462 ']' 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.144 23:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.144 [2024-12-13 23:46:57.067456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:18.144 [2024-12-13 23:46:57.067557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812462 ] 00:06:18.144 [2024-12-13 23:46:57.217703] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3812238 has claimed it. 00:06:18.144 [2024-12-13 23:46:57.217757] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3812462) - No such process 00:06:18.711 ERROR: process (pid: 3812462) is no longer running 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3812238 00:06:18.711 23:46:57 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3812238 00:06:18.711 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.970 lslocks: write error 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3812238 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3812238 ']' 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3812238 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.970 23:46:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812238 00:06:18.970 23:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.970 23:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.970 23:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812238' 00:06:18.970 killing process with pid 3812238 00:06:18.970 23:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3812238 00:06:18.970 23:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3812238 00:06:21.505 00:06:21.505 real 0m4.486s 00:06:21.505 user 0m4.616s 00:06:21.505 sys 0m0.797s 00:06:21.505 23:47:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.505 23:47:00 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.505 ************************************ 00:06:21.505 END TEST locking_app_on_locked_coremask 00:06:21.505 ************************************ 00:06:21.505 23:47:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.505 23:47:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.505 23:47:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.505 23:47:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.505 ************************************ 00:06:21.505 START TEST locking_overlapped_coremask 00:06:21.505 ************************************ 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3812949 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3812949 /var/tmp/spdk.sock 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3812949 ']' 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.505 23:47:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.505 [2024-12-13 23:47:00.488292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:21.505 [2024-12-13 23:47:00.488382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3812949 ] 00:06:21.505 [2024-12-13 23:47:00.601342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.764 [2024-12-13 23:47:00.712652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.764 [2024-12-13 23:47:00.712719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.764 [2024-12-13 23:47:00.712724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3813177 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3813177 /var/tmp/spdk2.sock 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3813177 /var/tmp/spdk2.sock 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3813177 /var/tmp/spdk2.sock 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3813177 ']' 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.700 23:47:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.700 [2024-12-13 23:47:01.634126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:22.700 [2024-12-13 23:47:01.634214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813177 ] 00:06:22.700 [2024-12-13 23:47:01.789991] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3812949 has claimed it. 00:06:22.700 [2024-12-13 23:47:01.790043] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3813177) - No such process 00:06:23.268 ERROR: process (pid: 3813177) is no longer running 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3812949 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3812949 ']' 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3812949 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3812949 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3812949' 00:06:23.268 killing process with pid 3812949 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3812949 00:06:23.268 23:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3812949 00:06:25.801 00:06:25.801 real 0m4.286s 00:06:25.801 user 0m11.833s 00:06:25.801 sys 0m0.599s 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.801 
************************************ 00:06:25.801 END TEST locking_overlapped_coremask 00:06:25.801 ************************************ 00:06:25.801 23:47:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.801 23:47:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.801 23:47:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.801 23:47:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.801 ************************************ 00:06:25.801 START TEST locking_overlapped_coremask_via_rpc 00:06:25.801 ************************************ 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3813667 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3813667 /var/tmp/spdk.sock 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3813667 ']' 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:25.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.801 23:47:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.801 [2024-12-13 23:47:04.849734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:25.801 [2024-12-13 23:47:04.849817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813667 ] 00:06:26.062 [2024-12-13 23:47:04.964209] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:26.062 [2024-12-13 23:47:04.964253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.062 [2024-12-13 23:47:05.081307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.062 [2024-12-13 23:47:05.081385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.062 [2024-12-13 23:47:05.081391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3813890 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 3813890 /var/tmp/spdk2.sock 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3813890 ']' 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.998 23:47:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.998 [2024-12-13 23:47:05.998224] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:26.998 [2024-12-13 23:47:05.998315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813890 ] 00:06:27.257 [2024-12-13 23:47:06.156804] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.257 [2024-12-13 23:47:06.156852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.257 [2024-12-13 23:47:06.381773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.257 [2024-12-13 23:47:06.385488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.257 [2024-12-13 23:47:06.385512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.790 23:47:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.790 [2024-12-13 23:47:08.533562] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3813667 has claimed it. 00:06:29.790 request: 00:06:29.790 { 00:06:29.790 "method": "framework_enable_cpumask_locks", 00:06:29.790 "req_id": 1 00:06:29.790 } 00:06:29.790 Got JSON-RPC error response 00:06:29.790 response: 00:06:29.790 { 00:06:29.790 "code": -32603, 00:06:29.790 "message": "Failed to claim CPU core: 2" 00:06:29.790 } 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.790 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3813667 /var/tmp/spdk.sock 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3813667 ']' 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3813890 /var/tmp/spdk2.sock 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3813890 ']' 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.791 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.050 00:06:30.050 real 0m4.181s 00:06:30.050 user 0m1.104s 00:06:30.050 sys 0m0.203s 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.050 23:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.050 ************************************ 00:06:30.050 END TEST locking_overlapped_coremask_via_rpc 00:06:30.050 ************************************ 00:06:30.050 23:47:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.050 23:47:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3813667 ]] 00:06:30.050 23:47:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3813667 00:06:30.050 23:47:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3813667 ']' 00:06:30.050 23:47:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3813667 00:06:30.050 23:47:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.050 23:47:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.050 23:47:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3813667 00:06:30.050 23:47:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.050 23:47:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.050 23:47:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3813667' 00:06:30.050 killing process with pid 3813667 00:06:30.050 23:47:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3813667 00:06:30.050 23:47:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3813667 00:06:32.771 23:47:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3813890 ]] 00:06:32.771 23:47:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3813890 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3813890 ']' 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3813890 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3813890 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3813890' 00:06:32.771 killing process with pid 3813890 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3813890 00:06:32.771 23:47:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3813890 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3813667 ]] 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3813667 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3813667 ']' 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3813667 00:06:35.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3813667) - No such process 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3813667 is not found' 00:06:35.305 Process with pid 3813667 is not found 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3813890 ]] 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3813890 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3813890 ']' 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3813890 00:06:35.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3813890) - No such process 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3813890 is not found' 00:06:35.305 Process with pid 3813890 is not found 00:06:35.305 23:47:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.305 00:06:35.305 real 0m48.197s 00:06:35.305 user 1m23.674s 00:06:35.305 sys 0m6.218s 00:06:35.305 23:47:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.305 
23:47:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.305 ************************************ 00:06:35.305 END TEST cpu_locks 00:06:35.305 ************************************ 00:06:35.305 00:06:35.305 real 1m17.452s 00:06:35.305 user 2m20.383s 00:06:35.305 sys 0m9.938s 00:06:35.305 23:47:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.305 23:47:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.305 ************************************ 00:06:35.305 END TEST event 00:06:35.305 ************************************ 00:06:35.306 23:47:14 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.306 23:47:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.306 23:47:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.306 23:47:14 -- common/autotest_common.sh@10 -- # set +x 00:06:35.306 ************************************ 00:06:35.306 START TEST thread 00:06:35.306 ************************************ 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.306 * Looking for test storage... 
00:06:35.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.306 23:47:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.306 23:47:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.306 23:47:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.306 23:47:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.306 23:47:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.306 23:47:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.306 23:47:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.306 23:47:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.306 23:47:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.306 23:47:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.306 23:47:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.306 23:47:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:35.306 23:47:14 thread -- scripts/common.sh@345 -- # : 1 00:06:35.306 23:47:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.306 23:47:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.306 23:47:14 thread -- scripts/common.sh@365 -- # decimal 1 00:06:35.306 23:47:14 thread -- scripts/common.sh@353 -- # local d=1 00:06:35.306 23:47:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.306 23:47:14 thread -- scripts/common.sh@355 -- # echo 1 00:06:35.306 23:47:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.306 23:47:14 thread -- scripts/common.sh@366 -- # decimal 2 00:06:35.306 23:47:14 thread -- scripts/common.sh@353 -- # local d=2 00:06:35.306 23:47:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.306 23:47:14 thread -- scripts/common.sh@355 -- # echo 2 00:06:35.306 23:47:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.306 23:47:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.306 23:47:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.306 23:47:14 thread -- scripts/common.sh@368 -- # return 0 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.306 --rc genhtml_branch_coverage=1 00:06:35.306 --rc genhtml_function_coverage=1 00:06:35.306 --rc genhtml_legend=1 00:06:35.306 --rc geninfo_all_blocks=1 00:06:35.306 --rc geninfo_unexecuted_blocks=1 00:06:35.306 00:06:35.306 ' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.306 --rc genhtml_branch_coverage=1 00:06:35.306 --rc genhtml_function_coverage=1 00:06:35.306 --rc genhtml_legend=1 00:06:35.306 --rc geninfo_all_blocks=1 00:06:35.306 --rc geninfo_unexecuted_blocks=1 00:06:35.306 00:06:35.306 ' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.306 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.306 --rc genhtml_branch_coverage=1 00:06:35.306 --rc genhtml_function_coverage=1 00:06:35.306 --rc genhtml_legend=1 00:06:35.306 --rc geninfo_all_blocks=1 00:06:35.306 --rc geninfo_unexecuted_blocks=1 00:06:35.306 00:06:35.306 ' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.306 --rc genhtml_branch_coverage=1 00:06:35.306 --rc genhtml_function_coverage=1 00:06:35.306 --rc genhtml_legend=1 00:06:35.306 --rc geninfo_all_blocks=1 00:06:35.306 --rc geninfo_unexecuted_blocks=1 00:06:35.306 00:06:35.306 ' 00:06:35.306 23:47:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.306 23:47:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.306 ************************************ 00:06:35.306 START TEST thread_poller_perf 00:06:35.306 ************************************ 00:06:35.306 23:47:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.306 [2024-12-13 23:47:14.311934] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:35.306 [2024-12-13 23:47:14.312051] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815361 ] 00:06:35.306 [2024-12-13 23:47:14.429731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.565 [2024-12-13 23:47:14.536312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.565 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:36.944 [2024-12-13T22:47:16.085Z] ====================================== 00:06:36.944 [2024-12-13T22:47:16.085Z] busy:2108222772 (cyc) 00:06:36.944 [2024-12-13T22:47:16.085Z] total_run_count: 410000 00:06:36.944 [2024-12-13T22:47:16.085Z] tsc_hz: 2100000000 (cyc) 00:06:36.944 [2024-12-13T22:47:16.085Z] ====================================== 00:06:36.944 [2024-12-13T22:47:16.085Z] poller_cost: 5142 (cyc), 2448 (nsec) 00:06:36.944 00:06:36.944 real 0m1.487s 00:06:36.944 user 0m1.356s 00:06:36.944 sys 0m0.125s 00:06:36.944 23:47:15 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.944 23:47:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.944 ************************************ 00:06:36.944 END TEST thread_poller_perf 00:06:36.944 ************************************ 00:06:36.944 23:47:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.944 23:47:15 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:36.944 23:47:15 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.944 23:47:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.944 ************************************ 00:06:36.944 START TEST thread_poller_perf 00:06:36.944 
************************************ 00:06:36.944 23:47:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.944 [2024-12-13 23:47:15.869151] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:36.944 [2024-12-13 23:47:15.869246] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3815730 ] 00:06:36.944 [2024-12-13 23:47:15.986803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.203 [2024-12-13 23:47:16.089477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.203 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:38.580 [2024-12-13T22:47:17.721Z] ====================================== 00:06:38.580 [2024-12-13T22:47:17.721Z] busy:2102341548 (cyc) 00:06:38.580 [2024-12-13T22:47:17.721Z] total_run_count: 4890000 00:06:38.580 [2024-12-13T22:47:17.721Z] tsc_hz: 2100000000 (cyc) 00:06:38.580 [2024-12-13T22:47:17.721Z] ====================================== 00:06:38.580 [2024-12-13T22:47:17.721Z] poller_cost: 429 (cyc), 204 (nsec) 00:06:38.580 00:06:38.580 real 0m1.475s 00:06:38.580 user 0m1.343s 00:06:38.580 sys 0m0.127s 00:06:38.580 23:47:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.580 23:47:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.580 ************************************ 00:06:38.580 END TEST thread_poller_perf 00:06:38.580 ************************************ 00:06:38.580 23:47:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.580 00:06:38.580 real 0m3.274s 00:06:38.580 user 0m2.861s 00:06:38.580 sys 0m0.425s 00:06:38.580 23:47:17 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.580 23:47:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.580 ************************************ 00:06:38.580 END TEST thread 00:06:38.580 ************************************ 00:06:38.580 23:47:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:38.580 23:47:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.580 23:47:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.580 23:47:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.580 23:47:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.580 ************************************ 00:06:38.580 START TEST app_cmdline 00:06:38.581 ************************************ 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.581 * Looking for test storage... 00:06:38.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.581 23:47:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.581 --rc genhtml_branch_coverage=1 
00:06:38.581 --rc genhtml_function_coverage=1 00:06:38.581 --rc genhtml_legend=1 00:06:38.581 --rc geninfo_all_blocks=1 00:06:38.581 --rc geninfo_unexecuted_blocks=1 00:06:38.581 00:06:38.581 ' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.581 --rc genhtml_branch_coverage=1 00:06:38.581 --rc genhtml_function_coverage=1 00:06:38.581 --rc genhtml_legend=1 00:06:38.581 --rc geninfo_all_blocks=1 00:06:38.581 --rc geninfo_unexecuted_blocks=1 00:06:38.581 00:06:38.581 ' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.581 --rc genhtml_branch_coverage=1 00:06:38.581 --rc genhtml_function_coverage=1 00:06:38.581 --rc genhtml_legend=1 00:06:38.581 --rc geninfo_all_blocks=1 00:06:38.581 --rc geninfo_unexecuted_blocks=1 00:06:38.581 00:06:38.581 ' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.581 --rc genhtml_branch_coverage=1 00:06:38.581 --rc genhtml_function_coverage=1 00:06:38.581 --rc genhtml_legend=1 00:06:38.581 --rc geninfo_all_blocks=1 00:06:38.581 --rc geninfo_unexecuted_blocks=1 00:06:38.581 00:06:38.581 ' 00:06:38.581 23:47:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.581 23:47:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3816102 00:06:38.581 23:47:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3816102 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3816102 ']' 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.581 23:47:17 app_cmdline -- app/cmdline.sh@16 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.581 23:47:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.581 [2024-12-13 23:47:17.646917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:38.581 [2024-12-13 23:47:17.647007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3816102 ] 00:06:38.840 [2024-12-13 23:47:17.759580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.840 [2024-12-13 23:47:17.864405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:39.777 { 00:06:39.777 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:39.777 "fields": { 00:06:39.777 "major": 25, 00:06:39.777 "minor": 1, 00:06:39.777 "patch": 0, 00:06:39.777 "suffix": "-pre", 00:06:39.777 "commit": "e01cb43b8" 00:06:39.777 } 00:06:39.777 } 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.777 23:47:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.777 23:47:18 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:40.036 23:47:18 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.036 request: 00:06:40.036 { 00:06:40.036 "method": "env_dpdk_get_mem_stats", 00:06:40.036 "req_id": 1 00:06:40.036 } 00:06:40.036 Got JSON-RPC error response 00:06:40.036 response: 00:06:40.036 { 00:06:40.036 "code": -32601, 00:06:40.036 "message": "Method not found" 00:06:40.036 } 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.036 23:47:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3816102 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3816102 ']' 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3816102 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3816102 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3816102' 00:06:40.036 killing process with pid 3816102 00:06:40.036 23:47:19 
app_cmdline -- common/autotest_common.sh@973 -- # kill 3816102 00:06:40.036 23:47:19 app_cmdline -- common/autotest_common.sh@978 -- # wait 3816102 00:06:42.567 00:06:42.567 real 0m4.070s 00:06:42.567 user 0m4.298s 00:06:42.567 sys 0m0.584s 00:06:42.567 23:47:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.567 23:47:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 ************************************ 00:06:42.567 END TEST app_cmdline 00:06:42.567 ************************************ 00:06:42.567 23:47:21 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:42.567 23:47:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.567 23:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.567 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.567 ************************************ 00:06:42.567 START TEST version 00:06:42.568 ************************************ 00:06:42.568 23:47:21 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:42.568 * Looking for test storage... 
00:06:42.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.568 23:47:21 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:42.568 23:47:21 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:42.568 23:47:21 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:42.568 23:47:21 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:42.568 23:47:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.568 23:47:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.568 23:47:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.568 23:47:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.568 23:47:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.568 23:47:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.568 23:47:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.568 23:47:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.568 23:47:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.568 23:47:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.568 23:47:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.568 23:47:21 version -- scripts/common.sh@344 -- # case "$op" in 00:06:42.568 23:47:21 version -- scripts/common.sh@345 -- # : 1 00:06:42.568 23:47:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.568 23:47:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.568 23:47:21 version -- scripts/common.sh@365 -- # decimal 1 00:06:42.568 23:47:21 version -- scripts/common.sh@353 -- # local d=1 00:06:42.568 23:47:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.568 23:47:21 version -- scripts/common.sh@355 -- # echo 1 00:06:42.568 23:47:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.568 23:47:21 version -- scripts/common.sh@366 -- # decimal 2 00:06:42.568 23:47:21 version -- scripts/common.sh@353 -- # local d=2 00:06:42.568 23:47:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.568 23:47:21 version -- scripts/common.sh@355 -- # echo 2 00:06:42.568 23:47:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.568 23:47:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.827 23:47:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.827 23:47:21 version -- scripts/common.sh@368 -- # return 0 00:06:42.827 23:47:21 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.827 23:47:21 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:42.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.827 --rc genhtml_branch_coverage=1 00:06:42.827 --rc genhtml_function_coverage=1 00:06:42.827 --rc genhtml_legend=1 00:06:42.827 --rc geninfo_all_blocks=1 00:06:42.827 --rc geninfo_unexecuted_blocks=1 00:06:42.827 00:06:42.827 ' 00:06:42.827 23:47:21 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:42.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.827 --rc genhtml_branch_coverage=1 00:06:42.827 --rc genhtml_function_coverage=1 00:06:42.827 --rc genhtml_legend=1 00:06:42.827 --rc geninfo_all_blocks=1 00:06:42.827 --rc geninfo_unexecuted_blocks=1 00:06:42.827 00:06:42.827 ' 00:06:42.827 23:47:21 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:42.827 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.827 --rc genhtml_branch_coverage=1 00:06:42.827 --rc genhtml_function_coverage=1 00:06:42.827 --rc genhtml_legend=1 00:06:42.827 --rc geninfo_all_blocks=1 00:06:42.827 --rc geninfo_unexecuted_blocks=1 00:06:42.827 00:06:42.827 ' 00:06:42.827 23:47:21 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:42.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.827 --rc genhtml_branch_coverage=1 00:06:42.827 --rc genhtml_function_coverage=1 00:06:42.827 --rc genhtml_legend=1 00:06:42.827 --rc geninfo_all_blocks=1 00:06:42.827 --rc geninfo_unexecuted_blocks=1 00:06:42.827 00:06:42.827 ' 00:06:42.827 23:47:21 version -- app/version.sh@17 -- # get_header_version major 00:06:42.827 23:47:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # cut -f2 00:06:42.827 23:47:21 version -- app/version.sh@17 -- # major=25 00:06:42.827 23:47:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.827 23:47:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # cut -f2 00:06:42.827 23:47:21 version -- app/version.sh@18 -- # minor=1 00:06:42.827 23:47:21 version -- app/version.sh@19 -- # get_header_version patch 00:06:42.827 23:47:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # cut -f2 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.827 
23:47:21 version -- app/version.sh@19 -- # patch=0 00:06:42.827 23:47:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:42.827 23:47:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # cut -f2 00:06:42.827 23:47:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.827 23:47:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:42.827 23:47:21 version -- app/version.sh@22 -- # version=25.1 00:06:42.827 23:47:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.827 23:47:21 version -- app/version.sh@28 -- # version=25.1rc0 00:06:42.827 23:47:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:42.827 23:47:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:42.827 23:47:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:42.828 23:47:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:42.828 00:06:42.828 real 0m0.241s 00:06:42.828 user 0m0.155s 00:06:42.828 sys 0m0.127s 00:06:42.828 23:47:21 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.828 23:47:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:42.828 ************************************ 00:06:42.828 END TEST version 00:06:42.828 ************************************ 00:06:42.828 23:47:21 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:42.828 23:47:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:42.828 23:47:21 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:42.828 23:47:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:42.828 23:47:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:42.828 23:47:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:42.828 23:47:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:42.828 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.828 23:47:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:42.828 23:47:21 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:42.828 23:47:21 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:42.828 23:47:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.828 23:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.828 23:47:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.828 ************************************ 00:06:42.828 START TEST nvmf_tcp 00:06:42.828 ************************************ 00:06:42.828 23:47:21 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:43.087 * Looking for test storage... 
00:06:43.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:43.087 23:47:21 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.087 23:47:21 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.087 23:47:21 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.087 23:47:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.087 --rc genhtml_branch_coverage=1 00:06:43.087 --rc genhtml_function_coverage=1 00:06:43.087 --rc genhtml_legend=1 00:06:43.087 --rc geninfo_all_blocks=1 00:06:43.087 --rc geninfo_unexecuted_blocks=1 00:06:43.087 00:06:43.087 ' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.087 --rc genhtml_branch_coverage=1 00:06:43.087 --rc genhtml_function_coverage=1 00:06:43.087 --rc genhtml_legend=1 00:06:43.087 --rc geninfo_all_blocks=1 00:06:43.087 --rc geninfo_unexecuted_blocks=1 00:06:43.087 00:06:43.087 ' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:43.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.087 --rc genhtml_branch_coverage=1 00:06:43.087 --rc genhtml_function_coverage=1 00:06:43.087 --rc genhtml_legend=1 00:06:43.087 --rc geninfo_all_blocks=1 00:06:43.087 --rc geninfo_unexecuted_blocks=1 00:06:43.087 00:06:43.087 ' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.087 --rc genhtml_branch_coverage=1 00:06:43.087 --rc genhtml_function_coverage=1 00:06:43.087 --rc genhtml_legend=1 00:06:43.087 --rc geninfo_all_blocks=1 00:06:43.087 --rc geninfo_unexecuted_blocks=1 00:06:43.087 00:06:43.087 ' 00:06:43.087 23:47:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:43.087 23:47:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:43.087 23:47:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.087 23:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.087 ************************************ 00:06:43.087 START TEST nvmf_target_core 00:06:43.087 ************************************ 00:06:43.087 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:43.087 * Looking for test storage... 
00:06:43.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:43.087 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.087 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.087 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.347 --rc genhtml_branch_coverage=1 00:06:43.347 --rc genhtml_function_coverage=1 00:06:43.347 --rc genhtml_legend=1 00:06:43.347 --rc geninfo_all_blocks=1 00:06:43.347 --rc geninfo_unexecuted_blocks=1 00:06:43.347 00:06:43.347 ' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.347 --rc genhtml_branch_coverage=1 
00:06:43.347 --rc genhtml_function_coverage=1 00:06:43.347 --rc genhtml_legend=1 00:06:43.347 --rc geninfo_all_blocks=1 00:06:43.347 --rc geninfo_unexecuted_blocks=1 00:06:43.347 00:06:43.347 ' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.347 --rc genhtml_branch_coverage=1 00:06:43.347 --rc genhtml_function_coverage=1 00:06:43.347 --rc genhtml_legend=1 00:06:43.347 --rc geninfo_all_blocks=1 00:06:43.347 --rc geninfo_unexecuted_blocks=1 00:06:43.347 00:06:43.347 ' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.347 --rc genhtml_branch_coverage=1 00:06:43.347 --rc genhtml_function_coverage=1 00:06:43.347 --rc genhtml_legend=1 00:06:43.347 --rc geninfo_all_blocks=1 00:06:43.347 --rc geninfo_unexecuted_blocks=1 00:06:43.347 00:06:43.347 ' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.347 ************************************ 00:06:43.347 START TEST nvmf_abort 00:06:43.347 ************************************ 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:43.347 * Looking for test storage... 
00:06:43.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:43.347 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.348 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.607 
23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.607 --rc genhtml_branch_coverage=1 00:06:43.607 --rc genhtml_function_coverage=1 00:06:43.607 --rc genhtml_legend=1 00:06:43.607 --rc geninfo_all_blocks=1 00:06:43.607 --rc 
geninfo_unexecuted_blocks=1 00:06:43.607 00:06:43.607 ' 00:06:43.607 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.608 --rc genhtml_branch_coverage=1 00:06:43.608 --rc genhtml_function_coverage=1 00:06:43.608 --rc genhtml_legend=1 00:06:43.608 --rc geninfo_all_blocks=1 00:06:43.608 --rc geninfo_unexecuted_blocks=1 00:06:43.608 00:06:43.608 ' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.608 --rc genhtml_branch_coverage=1 00:06:43.608 --rc genhtml_function_coverage=1 00:06:43.608 --rc genhtml_legend=1 00:06:43.608 --rc geninfo_all_blocks=1 00:06:43.608 --rc geninfo_unexecuted_blocks=1 00:06:43.608 00:06:43.608 ' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.608 --rc genhtml_branch_coverage=1 00:06:43.608 --rc genhtml_function_coverage=1 00:06:43.608 --rc genhtml_legend=1 00:06:43.608 --rc geninfo_all_blocks=1 00:06:43.608 --rc geninfo_unexecuted_blocks=1 00:06:43.608 00:06:43.608 ' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.608 23:47:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:43.608 23:47:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.881 23:47:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:48.881 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:48.881 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.881 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.881 23:47:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:48.882 Found net devices under 0000:af:00.0: cvl_0_0 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:48.882 Found net devices under 0000:af:00.1: cvl_0_1 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.882 23:47:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:06:49.141 00:06:49.141 --- 10.0.0.2 ping statistics --- 00:06:49.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.141 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:06:49.141 00:06:49.141 --- 10.0.0.1 ping statistics --- 00:06:49.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.141 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.141 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3820166 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3820166 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3820166 ']' 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.401 23:47:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:49.401 [2024-12-13 23:47:28.385714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:49.401 [2024-12-13 23:47:28.385804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.401 [2024-12-13 23:47:28.503735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.660 [2024-12-13 23:47:28.613090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.660 [2024-12-13 23:47:28.613135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.660 [2024-12-13 23:47:28.613145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.660 [2024-12-13 23:47:28.613156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.660 [2024-12-13 23:47:28.613164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:49.660 [2024-12-13 23:47:28.615389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.660 [2024-12-13 23:47:28.615478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.660 [2024-12-13 23:47:28.615482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 [2024-12-13 23:47:29.228628] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 Malloc0 00:06:50.228 23:47:29 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 Delay0 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.228 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.228 [2024-12-13 23:47:29.367580] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.487 23:47:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:50.487 [2024-12-13 23:47:29.522618] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:53.022 Initializing NVMe Controllers 00:06:53.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:53.022 controller IO queue size 128 less than required 00:06:53.022 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:53.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:53.022 Initialization complete. Launching workers. 
00:06:53.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33315 00:06:53.022 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33372, failed to submit 66 00:06:53.022 success 33315, unsuccessful 57, failed 0 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:53.022 rmmod nvme_tcp 00:06:53.022 rmmod nvme_fabrics 00:06:53.022 rmmod nvme_keyring 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:53.022 23:47:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3820166 ']' 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3820166 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3820166 ']' 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3820166 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3820166 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3820166' 00:06:53.022 killing process with pid 3820166 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3820166 00:06:53.022 23:47:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3820166 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.959 23:47:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:56.495 00:06:56.495 real 0m12.813s 00:06:56.495 user 0m16.292s 00:06:56.495 sys 0m5.290s 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.495 ************************************ 00:06:56.495 END TEST nvmf_abort 00:06:56.495 ************************************ 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:56.495 ************************************ 00:06:56.495 START TEST nvmf_ns_hotplug_stress 00:06:56.495 ************************************ 00:06:56.495 23:47:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:56.495 * Looking for test storage... 00:06:56.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.495 
23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.495 23:47:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:56.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.495 --rc genhtml_branch_coverage=1 00:06:56.495 --rc genhtml_function_coverage=1 00:06:56.495 --rc genhtml_legend=1 00:06:56.495 --rc geninfo_all_blocks=1 00:06:56.495 --rc geninfo_unexecuted_blocks=1 00:06:56.495 00:06:56.495 ' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:56.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.495 --rc genhtml_branch_coverage=1 00:06:56.495 --rc genhtml_function_coverage=1 00:06:56.495 --rc genhtml_legend=1 00:06:56.495 --rc geninfo_all_blocks=1 00:06:56.495 --rc geninfo_unexecuted_blocks=1 00:06:56.495 00:06:56.495 ' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:56.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.495 --rc genhtml_branch_coverage=1 00:06:56.495 --rc genhtml_function_coverage=1 00:06:56.495 --rc genhtml_legend=1 00:06:56.495 --rc geninfo_all_blocks=1 00:06:56.495 --rc geninfo_unexecuted_blocks=1 00:06:56.495 00:06:56.495 ' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:56.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.495 --rc genhtml_branch_coverage=1 00:06:56.495 --rc genhtml_function_coverage=1 00:06:56.495 --rc genhtml_legend=1 00:06:56.495 --rc geninfo_all_blocks=1 00:06:56.495 --rc geninfo_unexecuted_blocks=1 00:06:56.495 
00:06:56.495 ' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.495 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:56.496 23:47:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.771 23:47:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:01.771 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:01.771 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.771 23:47:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:01.771 Found net devices under 0000:af:00.0: cvl_0_0 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.771 23:47:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:01.771 Found net devices under 0000:af:00.1: cvl_0_1 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.771 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.772 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.035 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.035 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.035 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.035 23:47:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.035 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.035 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.035 23:47:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.294 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:07:02.294 00:07:02.294 --- 10.0.0.2 ping statistics --- 00:07:02.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.294 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:07:02.294 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:07:02.294 00:07:02.294 --- 10.0.0.1 ping statistics --- 00:07:02.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.294 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3824348 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3824348 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3824348 ']' 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.295 23:47:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:02.295 [2024-12-13 23:47:41.316512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:02.295 [2024-12-13 23:47:41.316616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.554 [2024-12-13 23:47:41.435777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.554 [2024-12-13 23:47:41.546557] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.554 [2024-12-13 23:47:41.546598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.554 [2024-12-13 23:47:41.546608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.554 [2024-12-13 23:47:41.546618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.554 [2024-12-13 23:47:41.546626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.554 [2024-12-13 23:47:41.548880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.554 [2024-12-13 23:47:41.548899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.554 [2024-12-13 23:47:41.548906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:03.122 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:03.380 [2024-12-13 23:47:42.322310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.380 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:03.639 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.639 [2024-12-13 23:47:42.709404] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.639 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.898 23:47:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:04.157 Malloc0 00:07:04.157 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:04.415 Delay0 00:07:04.415 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.674 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:04.674 NULL1 00:07:04.674 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:04.932 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:04.932 23:47:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3824830 00:07:04.932 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:04.932 23:47:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.190 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.448 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:05.448 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:05.448 true 00:07:05.706 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:05.706 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.706 23:47:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.965 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:05.965 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:06.222 true 00:07:06.223 23:47:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:06.223 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.481 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.739 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:06.739 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:06.739 true 00:07:06.997 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:06.997 23:47:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.997 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.256 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:07.256 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:07.515 true 00:07:07.515 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:07.515 23:47:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.773 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.032 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:08.032 23:47:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:08.032 true 00:07:08.032 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:08.032 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.291 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.549 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:08.549 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:08.808 true 00:07:08.808 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:08.808 23:47:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.067 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.326 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:09.326 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:09.326 true 00:07:09.326 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:09.326 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.585 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.844 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:09.844 23:47:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:10.102 true 00:07:10.102 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:10.102 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.361 
23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.620 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:10.620 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:10.620 true 00:07:10.620 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:10.620 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.878 23:47:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.137 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:11.137 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:11.395 true 00:07:11.395 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:11.395 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.654 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.913 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:11.913 23:47:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:11.913 true 00:07:11.913 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:11.913 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.171 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.430 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:12.430 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:12.689 true 00:07:12.689 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:12.689 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.947 23:47:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.206 
23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:13.206 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:13.206 true 00:07:13.206 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:13.206 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.464 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.723 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:13.723 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:13.981 true 00:07:13.981 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:13.981 23:47:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.239 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.239 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:14.239 23:47:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:14.497 true 00:07:14.497 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:14.497 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.756 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.015 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:15.015 23:47:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:15.273 true 00:07:15.273 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:15.274 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.532 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.532 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:15.532 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:15.790 true 00:07:15.790 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:15.790 23:47:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.049 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.306 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:16.306 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:16.565 true 00:07:16.565 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:16.565 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.565 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.884 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:16.884 23:47:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:17.185 true 00:07:17.185 23:47:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:17.185 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.185 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.443 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:17.443 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:17.702 true 00:07:17.702 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:17.702 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.960 23:47:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.219 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:18.219 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:18.219 true 00:07:18.477 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:18.477 23:47:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.477 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.736 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:18.736 23:47:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:18.995 true 00:07:18.995 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:18.995 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.254 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.513 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:19.513 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:19.513 true 00:07:19.771 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:19.771 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.771 23:47:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.030 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:20.030 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:20.288 true 00:07:20.288 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:20.288 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.547 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.805 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:20.805 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:20.805 true 00:07:21.064 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:21.064 23:47:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.064 
23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.323 23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:21.323 23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:21.581 true 00:07:21.581 23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:21.581 23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.840 23:48:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.100 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:22.100 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:22.100 true 00:07:22.359 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:22.359 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.359 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.617 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:22.617 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:22.876 true 00:07:22.876 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:22.876 23:48:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.135 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.393 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:23.393 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:23.393 true 00:07:23.393 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:23.393 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.652 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.910 
23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:23.910 23:48:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:24.169 true 00:07:24.169 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:24.169 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.428 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.686 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:24.686 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:24.686 true 00:07:24.686 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:24.686 23:48:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.945 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.203 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:25.204 23:48:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:25.462 true 00:07:25.462 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:25.462 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.721 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.980 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:25.980 23:48:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:25.980 true 00:07:25.980 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:25.980 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.237 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.495 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:26.495 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:26.754 true 00:07:26.754 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:26.754 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.013 23:48:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.013 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:27.013 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:27.271 true 00:07:27.271 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:27.271 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.530 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.789 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:27.789 23:48:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:28.047 true 00:07:28.047 23:48:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:28.048 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.306 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.565 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:28.565 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:28.565 true 00:07:28.565 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:28.565 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.824 23:48:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.083 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:29.083 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:29.341 true 00:07:29.341 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:29.341 23:48:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.600 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.600 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:29.600 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:29.858 true 00:07:29.858 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:29.858 23:48:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.117 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.376 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:30.376 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:30.376 true 00:07:30.634 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:30.634 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.634 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.893 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:30.893 23:48:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:31.152 true 00:07:31.152 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:31.152 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.411 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.669 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:31.669 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:31.669 true 00:07:31.669 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:31.669 23:48:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.928 
23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.187 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:32.187 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:32.446 true 00:07:32.446 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:32.446 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.704 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.963 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:32.963 23:48:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:32.963 true 00:07:32.963 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:32.963 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.222 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.480 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:33.480 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:33.739 true 00:07:33.739 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:33.739 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.998 23:48:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.257 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:34.257 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:34.257 true 00:07:34.257 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830 00:07:34.257 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.516 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.775 
23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:07:34.775 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:07:35.034 true
00:07:35.034 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830
00:07:35.034 23:48:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.293 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:35.293 Initializing NVMe Controllers
00:07:35.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:35.293 Controller IO queue size 128, less than required.
00:07:35.293 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:35.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:35.293 Initialization complete. Launching workers.
00:07:35.293 ========================================================
00:07:35.293 Latency(us)
00:07:35.293 Device Information : IOPS MiB/s Average min max
00:07:35.293 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23194.43 11.33 5518.36 3015.44 9970.72
00:07:35.293 ========================================================
00:07:35.293 Total : 23194.43 11.33 5518.36 3015.44 9970.72
00:07:35.293
00:07:35.293 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:07:35.293 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:07:35.552 true
00:07:35.552 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3824830
00:07:35.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3824830) - No such process
00:07:35.552 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3824830
00:07:35.552 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:35.811 23:48:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:36.070
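[editor's note] The single-worker phase traced above follows one simple pattern: loop while the I/O process (PID 3824830 here) is still alive per kill -0, and on each pass remove namespace 1, re-add it, bump null_size, and resize NULL1. A minimal runnable sketch of that control flow, with echo standing in for the rpc.py calls and a short background sleep standing in for the perf process (all names illustrative, not the test's own):

```shell
#!/usr/bin/env bash
# Sketch of the add/remove/resize loop seen in the trace above.
# A background sleep plays the role of the I/O process watched via kill -0;
# echo replaces the real rpc.py invocations.
sleep 1 &
perf_pid=$!

null_size=1024
while kill -0 "$perf_pid" 2>/dev/null; do
    echo "nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1"   # rpc.py call in the real script
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0" # rpc.py call in the real script
    null_size=$((null_size + 1))                                    # grow by one block per pass
    echo "bdev_null_resize NULL1 $null_size"
    sleep 0.1
done

wait "$perf_pid" 2>/dev/null
echo "perf process exited; last null_size=$null_size"
```

Once kill -0 reports "No such process", the script stops resizing and tears both namespaces down, which is exactly what the @53-@55 records show.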
23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:36.070 null0 00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.070 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:36.328 null1 00:07:36.328 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.328 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.328 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:36.587 null2 00:07:36.587 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.587 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:36.587 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:36.845 null3 00:07:36.845 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:36.845 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( 
i < nthreads )) 00:07:36.845 23:48:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:37.103 null4 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:37.103 null5 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.103 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:37.362 null6 00:07:37.362 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.362 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.362 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:37.621 null7 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:37.621 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
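[editor's note] The interleaved @59-@64 records above are eight concurrent copies of add_remove: the script first creates null0 through null7, then backgrounds one worker per bdev, appends each PID to pids, and later waits on all of them (the "wait 3830857 3830859 ..." record). Stripped of the RPC plumbing, the fork/collect/wait shape is roughly this (a sketch only; echo replaces the rpc.py calls):

```shell
#!/usr/bin/env bash
# Sketch of the eight-worker hotplug phase: create bdevs, fork workers,
# collect PIDs, wait for all. echo stands in for rpc.py.
nthreads=8
pids=()

for ((t = 0; t < nthreads; t++)); do
    echo "bdev_null_create null$t 100 4096"   # 100 MiB, 4096-byte blocks
done

for ((t = 0; t < nthreads; t++)); do
    (
        # Each worker attaches and detaches its own namespace 10 times,
        # mirroring add_remove's (( i < 10 )) loop in the trace.
        for ((i = 0; i < 10; i++)); do
            echo "worker $t: nvmf_subsystem_add_ns -n $((t + 1)) null$t"
            echo "worker $t: nvmf_subsystem_remove_ns $((t + 1))"
        done
    ) &
    pids+=($!)
done

wait "${pids[@]}"
echo "all ${#pids[@]} workers done"
```

Because the eight workers run unsynchronized against one subsystem, their trace lines interleave, which is why the @16/@17 records above appear shuffled rather than in namespace order.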
00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3830857 3830859 3830860 3830862 3830864 3830866 3830867 3830869 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.622 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.881 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.140 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.400 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.659 23:48:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.659 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.918 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:38.919 23:48:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:38.919 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.177 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.178 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.436 23:48:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.436 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.696 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:39.955 23:48:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:39.955 23:48:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:39.955 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.214 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 
23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.473 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:40.732 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:40.991 23:48:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:40.991 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:40.991 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:40.991 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.250 
23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.250 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:41.509 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.767 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.768 rmmod nvme_tcp 00:07:41.768 rmmod nvme_fabrics 00:07:41.768 rmmod nvme_keyring 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3824348 ']' 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3824348 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 3824348 ']' 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3824348 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824348 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824348' 00:07:41.768 killing process with pid 3824348 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3824348 00:07:41.768 23:48:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3824348 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.145 23:48:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.680 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.680 00:07:45.680 real 0m48.985s 00:07:45.680 user 3m26.007s 00:07:45.680 sys 0m16.765s 00:07:45.680 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.680 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:45.681 ************************************ 00:07:45.681 END TEST nvmf_ns_hotplug_stress 00:07:45.681 ************************************ 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.681 ************************************ 00:07:45.681 START TEST nvmf_delete_subsystem 00:07:45.681 ************************************ 00:07:45.681 
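The ns_hotplug_stress trace above repeats one pattern: while `(( i < 10 ))`, attach namespaces 1..8 (backed by bdevs null0..null7) to `nqn.2016-06.io.spdk:cnode1`, then detach them all. The shuffled namespace order in the log suggests the RPC calls run in the background. A minimal sketch of that cycle, where `rpc` is a hypothetical stand-in for `scripts/rpc.py` (the real script issues JSON-RPC calls to the SPDK target):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for scripts/rpc.py; the real tool talks to the target.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

stress_cycle() {
    local n
    # Attach namespaces 1..8 in the background, as the interleaved log order suggests.
    for n in {1..8}; do
        rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" &
    done
    wait
    # Then detach all eight, again in parallel.
    for n in {1..8}; do
        rpc nvmf_subsystem_remove_ns "$NQN" "$n" &
    done
    wait
}

# Mirrors the (( ++i )) / (( i < 10 )) loop traced in the log.
for ((i = 0; i < 10; i++)); do stress_cycle; done
```

This is a sketch of the traced behavior, not the actual `ns_hotplug_stress.sh`; the real script's loop bounds and error handling live in the SPDK repo.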
23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:45.681 * Looking for test storage... 00:07:45.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.681 23:48:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.681 23:48:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.681 --rc genhtml_branch_coverage=1 00:07:45.681 --rc genhtml_function_coverage=1 00:07:45.681 --rc genhtml_legend=1 00:07:45.681 --rc geninfo_all_blocks=1 00:07:45.681 --rc geninfo_unexecuted_blocks=1 00:07:45.681 00:07:45.681 ' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.681 --rc genhtml_branch_coverage=1 00:07:45.681 --rc genhtml_function_coverage=1 00:07:45.681 --rc genhtml_legend=1 00:07:45.681 --rc geninfo_all_blocks=1 00:07:45.681 --rc geninfo_unexecuted_blocks=1 00:07:45.681 00:07:45.681 ' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.681 --rc genhtml_branch_coverage=1 00:07:45.681 --rc genhtml_function_coverage=1 00:07:45.681 --rc genhtml_legend=1 00:07:45.681 --rc geninfo_all_blocks=1 00:07:45.681 --rc geninfo_unexecuted_blocks=1 00:07:45.681 00:07:45.681 ' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.681 --rc genhtml_branch_coverage=1 00:07:45.681 --rc genhtml_function_coverage=1 00:07:45.681 --rc genhtml_legend=1 00:07:45.681 --rc geninfo_all_blocks=1 00:07:45.681 --rc geninfo_unexecuted_blocks=1 00:07:45.681 00:07:45.681 ' 
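The trace above shows `scripts/common.sh` deciding whether the installed lcov (1.15) is older than 2: it splits each version on `IFS=.-:` into arrays `ver1`/`ver2`, then compares field by field, treating a missing field as 0. A hedged sketch of that comparison (names are illustrative, not the exact helpers in `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version "less than" check traced from scripts/common.sh.
# Returns 0 (true) if $1 < $2, 1 otherwise; equal versions are not "less than".
ver_lt() {
    local IFS='.-:'            # split fields on '.', '-', ':' as in the trace
    local -a ver1 ver2
    local v len
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields default to 0, so "2" compares like "2.0".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

In the log this check passes (1 < 2 on the first field), which is why the run falls back to the older `--rc lcov_branch_coverage=1` option spelling.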
00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.681 23:48:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.681 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
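The `[: : integer expression expected` error logged above at common.sh line 33 comes from a numeric `[ ... -eq 1 ]` test receiving an empty string. A minimal sketch of that failure mode and the usual defaulting guard (the variable name `flag` here is a hypothetical stand-in for whatever common.sh tests; this is not the actual fix applied upstream):

```shell
# Reproduces the "[: : integer expression expected" failure seen above:
# an empty string is not a valid operand for the numeric -eq operator.
flag=""   # hypothetical stand-in for the unset flag tested at common.sh:33
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "skipped (empty or non-numeric)"   # [ returns status 2 here
fi

# Defaulting the variable keeps the numeric test well-formed:
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With the `${flag:-0}` expansion the comparison is always between two integers, so the test evaluates cleanly instead of spilling an error to the log.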
nvmftestinit 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.682 23:48:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.954 23:48:29 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.954 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:50.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:50.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:50.955 Found net devices under 0000:af:00.0: cvl_0_0 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:50.955 Found net devices under 0000:af:00.1: cvl_0_1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.955 23:48:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:07:51.214 00:07:51.214 --- 10.0.0.2 ping statistics --- 00:07:51.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.214 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:07:51.214 00:07:51.214 --- 10.0.0.1 ping statistics --- 00:07:51.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.214 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:51.214 23:48:30 
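The trace above (nvmf/common.sh, `nvmf_tcp_init`) sets up the physical-NIC TCP test network: the target interface `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` with 10.0.0.2, the initiator interface `cvl_0_1` keeps 10.0.0.1 in the root namespace, an iptables rule admits port 4420, and a ping in each direction verifies reachability. A dry-run sketch of that sequence; `run` only prints, since the real commands need root and the `cvl_0_*` devices present on the test node:

```shell
# Dry-run sketch of the namespace setup logged above; run() echoes each
# command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"             # target side, 10.0.0.2
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
```

Keeping the target in its own namespace is what lets both "ends" of the NVMe/TCP connection live on one machine while still traversing the physical NICs.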
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3835515 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3835515 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3835515 ']' 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.214 23:48:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 [2024-12-13 23:48:30.309911] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:51.214 [2024-12-13 23:48:30.310005] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.473 [2024-12-13 23:48:30.429370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.473 [2024-12-13 23:48:30.534099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.473 [2024-12-13 23:48:30.534147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.473 [2024-12-13 23:48:30.534159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.473 [2024-12-13 23:48:30.534169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.473 [2024-12-13 23:48:30.534179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.473 [2024-12-13 23:48:30.536272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.473 [2024-12-13 23:48:30.536281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 [2024-12-13 23:48:31.147253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 [2024-12-13 23:48:31.167492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 NULL1 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.041 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.299 Delay0 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.299 23:48:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3835637 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:52.299 23:48:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:52.299 [2024-12-13 23:48:31.300952] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
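The RPC sequence that target/delete_subsystem.sh drives, as recorded above, sketched as a dry run. `rpc` only echoes here; on the test node `rpc_cmd` presumably forwards these to the nvmf_tgt application socket (via SPDK's `scripts/rpc.py`). The delay bdev's 1-second latencies appear to be what keeps I/O in flight when the subsystem is torn down:

```shell
# Dry-run sketch of the delete_subsystem test's RPC calls (arguments
# copied from the log above); rpc() echoes instead of executing.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512            # size / block size as logged
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# spdk_nvme_perf then runs I/O against the slow Delay0 namespace while:
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

Deleting the subsystem while the perf job still has commands queued on Delay0 forces the target to abort them, which is the behavior the aborted completions in the log exercise.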
00:07:54.203 23:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.203 23:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.203 23:48:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error 
(sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Write completed with error (sct=0, sc=8) 00:07:54.462 [2024-12-13 23:48:33.475815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.462 Read 
completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 Read completed with error (sct=0, sc=8) 00:07:54.462 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 
Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 starting I/O failed: -6 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 [2024-12-13 23:48:33.476571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error 
(sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 [2024-12-13 23:48:33.477268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, 
sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Write completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 Write 
completed with error (sct=0, sc=8) 00:07:54.463 Read completed with error (sct=0, sc=8) 00:07:54.463 [2024-12-13 23:48:33.478380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:07:55.400 [2024-12-13 23:48:34.439847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Read completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.400 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 
Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 [2024-12-13 23:48:34.478079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 [2024-12-13 23:48:34.478839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 
00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 [2024-12-13 23:48:34.480390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 
00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Write completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 Read completed with error (sct=0, sc=8) 00:07:55.401 [2024-12-13 23:48:34.484844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:07:55.401 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.401 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:55.401 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3835637 00:07:55.401 23:48:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:55.401 Initializing NVMe Controllers 00:07:55.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:55.401 Controller IO queue size 128, less than required. 00:07:55.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:55.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:55.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:55.401 Initialization complete. Launching workers. 00:07:55.401 ======================================================== 00:07:55.401 Latency(us) 00:07:55.401 Device Information : IOPS MiB/s Average min max 00:07:55.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.54 0.09 950035.99 972.78 1013200.55 00:07:55.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.78 0.08 868093.00 716.44 1013794.70 00:07:55.401 ======================================================== 00:07:55.401 Total : 346.31 0.17 912703.80 716.44 1013794.70 00:07:55.401 00:07:55.401 [2024-12-13 23:48:34.490128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:07:55.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3835637 00:07:55.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3835637) - No such process 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 
-- # NOT wait 3835637 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3835637 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3835637 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.968 23:48:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.968 23:48:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.968 [2024-12-13 23:48:35.015322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3836304 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:55.968 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.225 [2024-12-13 
23:48:35.127638] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:56.483 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.483 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:56.483 23:48:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.050 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.050 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:57.050 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.618 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.618 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:57.618 23:48:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.184 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.184 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:58.184 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.443 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.443 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # 
kill -0 3836304 00:07:58.443 23:48:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.010 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.010 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:59.010 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.406 Initializing NVMe Controllers 00:07:59.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.406 Controller IO queue size 128, less than required. 00:07:59.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:59.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:59.406 Initialization complete. Launching workers. 
00:07:59.406 ======================================================== 00:07:59.406 Latency(us) 00:07:59.406 Device Information : IOPS MiB/s Average min max 00:07:59.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004827.18 1000188.83 1040863.82 00:07:59.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004372.08 1000170.72 1014123.73 00:07:59.406 ======================================================== 00:07:59.406 Total : 256.00 0.12 1004599.63 1000170.72 1040863.82 00:07:59.406 00:07:59.688 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.688 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3836304 00:07:59.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3836304) - No such process 00:07:59.688 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3836304 00:07:59.688 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:59.689 rmmod nvme_tcp 00:07:59.689 rmmod nvme_fabrics 00:07:59.689 rmmod nvme_keyring 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3835515 ']' 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3835515 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3835515 ']' 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3835515 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3835515 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3835515' 00:07:59.689 killing process with pid 3835515 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3835515 00:07:59.689 23:48:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3835515 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.067 23:48:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.973 00:08:02.973 real 0m17.621s 00:08:02.973 user 0m32.270s 00:08:02.973 sys 0m5.391s 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.973 ************************************ 00:08:02.973 END TEST 
nvmf_delete_subsystem 00:08:02.973 ************************************ 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.973 ************************************ 00:08:02.973 START TEST nvmf_host_management 00:08:02.973 ************************************ 00:08:02.973 23:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:02.973 * Looking for test storage... 00:08:02.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.973 23:48:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:02.973 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.974 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:03.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.233 --rc genhtml_branch_coverage=1 00:08:03.233 --rc genhtml_function_coverage=1 00:08:03.233 --rc genhtml_legend=1 00:08:03.233 --rc 
geninfo_all_blocks=1 00:08:03.233 --rc geninfo_unexecuted_blocks=1 00:08:03.233 00:08:03.233 ' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:03.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.233 --rc genhtml_branch_coverage=1 00:08:03.233 --rc genhtml_function_coverage=1 00:08:03.233 --rc genhtml_legend=1 00:08:03.233 --rc geninfo_all_blocks=1 00:08:03.233 --rc geninfo_unexecuted_blocks=1 00:08:03.233 00:08:03.233 ' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:03.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.233 --rc genhtml_branch_coverage=1 00:08:03.233 --rc genhtml_function_coverage=1 00:08:03.233 --rc genhtml_legend=1 00:08:03.233 --rc geninfo_all_blocks=1 00:08:03.233 --rc geninfo_unexecuted_blocks=1 00:08:03.233 00:08:03.233 ' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:03.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.233 --rc genhtml_branch_coverage=1 00:08:03.233 --rc genhtml_function_coverage=1 00:08:03.233 --rc genhtml_legend=1 00:08:03.233 --rc geninfo_all_blocks=1 00:08:03.233 --rc geninfo_unexecuted_blocks=1 00:08:03.233 00:08:03.233 ' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.233 
23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.233 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:03.234 23:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:09.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:09.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.804 23:48:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:09.804 Found net devices under 0000:af:00.0: cvl_0_0 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.804 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:09.805 Found net devices under 0000:af:00.1: cvl_0_1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:08:09.805 00:08:09.805 --- 10.0.0.2 ping statistics --- 00:08:09.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.805 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:09.805 00:08:09.805 --- 10.0.0.1 ping statistics --- 00:08:09.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.805 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.805 23:48:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.805 23:48:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3840678 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3840678 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3840678 ']' 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 [2024-12-13 23:48:48.099927] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:09.805 [2024-12-13 23:48:48.100038] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.805 [2024-12-13 23:48:48.220025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.805 [2024-12-13 23:48:48.334688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.805 [2024-12-13 23:48:48.334734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.805 [2024-12-13 23:48:48.334744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.805 [2024-12-13 23:48:48.334756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.805 [2024-12-13 23:48:48.334764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:09.805 [2024-12-13 23:48:48.337216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.805 [2024-12-13 23:48:48.337293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.805 [2024-12-13 23:48:48.337360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.805 [2024-12-13 23:48:48.337375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.805 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:09.805 [2024-12-13 23:48:48.943557] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:10.065 23:48:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.065 23:48:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 Malloc0 00:08:10.065 [2024-12-13 23:48:49.068939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3840825 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3840825 /var/tmp/bdevperf.sock 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3840825 ']' 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:10.065 { 00:08:10.065 "params": { 00:08:10.065 "name": "Nvme$subsystem", 00:08:10.065 "trtype": "$TEST_TRANSPORT", 00:08:10.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:10.065 "adrfam": "ipv4", 00:08:10.065 "trsvcid": "$NVMF_PORT", 00:08:10.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:10.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:10.065 "hdgst": ${hdgst:-false}, 
00:08:10.065 "ddgst": ${ddgst:-false} 00:08:10.065 }, 00:08:10.065 "method": "bdev_nvme_attach_controller" 00:08:10.065 } 00:08:10.065 EOF 00:08:10.065 )") 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:10.065 23:48:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:10.065 "params": { 00:08:10.065 "name": "Nvme0", 00:08:10.065 "trtype": "tcp", 00:08:10.065 "traddr": "10.0.0.2", 00:08:10.065 "adrfam": "ipv4", 00:08:10.065 "trsvcid": "4420", 00:08:10.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.065 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:10.065 "hdgst": false, 00:08:10.065 "ddgst": false 00:08:10.065 }, 00:08:10.065 "method": "bdev_nvme_attach_controller" 00:08:10.065 }' 00:08:10.065 [2024-12-13 23:48:49.192497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:10.065 [2024-12-13 23:48:49.192585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840825 ] 00:08:10.327 [2024-12-13 23:48:49.312427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.327 [2024-12-13 23:48:49.427762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.896 Running I/O for 10 seconds... 
00:08:10.896 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.896 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:10.896 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:10.896 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.896 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:11.156 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=295 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 295 -ge 100 ']' 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:11.157 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.157 23:48:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:11.157 [2024-12-13 23:48:50.090408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 
[2024-12-13 23:48:50.090722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.090988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.090999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:11.157 [2024-12-13 23:48:50.091094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.157 [2024-12-13 23:48:50.091159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.157 [2024-12-13 23:48:50.091169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 
[2024-12-13 23:48:50.091583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.091852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:11.158 [2024-12-13 23:48:50.091861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:11.158 [2024-12-13 23:48:50.093157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:11.158 task offset: 51072 on job bdev=Nvme0n1 fails 00:08:11.158 00:08:11.158 Latency(us) 00:08:11.158 [2024-12-13T22:48:50.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.158 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:11.158 Job: Nvme0n1 ended in about 0.23 seconds with error 00:08:11.158 Verification LBA range: start 0x0 length 0x400 00:08:11.158 Nvme0n1 : 0.23 1670.45 104.40 278.41 0.00 31404.97 2137.72 30583.47 00:08:11.158 [2024-12-13T22:48:50.299Z] =================================================================================================================== 00:08:11.158 [2024-12-13T22:48:50.299Z] Total : 1670.45 104.40 278.41 0.00 31404.97 2137.72 30583.47 00:08:11.158 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.158 23:48:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:11.158 [2024-12-13 23:48:50.109740] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:08:11.158 [2024-12-13 23:48:50.109782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:08:11.158 [2024-12-13 23:48:50.115522] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3840825 00:08:12.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3840825) - No such process 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:12.095 { 00:08:12.095 "params": { 00:08:12.095 "name": "Nvme$subsystem", 00:08:12.095 "trtype": "$TEST_TRANSPORT", 00:08:12.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.095 "adrfam": "ipv4", 00:08:12.095 
"trsvcid": "$NVMF_PORT", 00:08:12.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.095 "hdgst": ${hdgst:-false}, 00:08:12.095 "ddgst": ${ddgst:-false} 00:08:12.095 }, 00:08:12.095 "method": "bdev_nvme_attach_controller" 00:08:12.095 } 00:08:12.095 EOF 00:08:12.095 )") 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:12.095 23:48:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:12.095 "params": { 00:08:12.095 "name": "Nvme0", 00:08:12.095 "trtype": "tcp", 00:08:12.095 "traddr": "10.0.0.2", 00:08:12.095 "adrfam": "ipv4", 00:08:12.095 "trsvcid": "4420", 00:08:12.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:12.095 "hdgst": false, 00:08:12.095 "ddgst": false 00:08:12.095 }, 00:08:12.095 "method": "bdev_nvme_attach_controller" 00:08:12.095 }' 00:08:12.095 [2024-12-13 23:48:51.185288] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:12.095 [2024-12-13 23:48:51.185373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841193 ] 00:08:12.353 [2024-12-13 23:48:51.298277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.353 [2024-12-13 23:48:51.412359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.921 Running I/O for 1 seconds... 
00:08:14.299 1792.00 IOPS, 112.00 MiB/s 00:08:14.299 Latency(us) 00:08:14.299 [2024-12-13T22:48:53.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.299 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.299 Verification LBA range: start 0x0 length 0x400 00:08:14.299 Nvme0n1 : 1.03 1807.32 112.96 0.00 0.00 34837.09 5274.09 30583.47 00:08:14.299 [2024-12-13T22:48:53.441Z] =================================================================================================================== 00:08:14.300 [2024-12-13T22:48:53.441Z] Total : 1807.32 112.96 0.00 0.00 34837.09 5274.09 30583.47 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:14.867 23:48:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.867 23:48:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.867 rmmod nvme_tcp 00:08:14.867 rmmod nvme_fabrics 00:08:14.867 rmmod nvme_keyring 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3840678 ']' 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3840678 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3840678 ']' 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3840678 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3840678 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3840678' 00:08:15.126 killing process with pid 3840678 00:08:15.126 23:48:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3840678 00:08:15.126 23:48:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3840678 00:08:16.505 [2024-12-13 23:48:55.338045] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.505 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.506 23:48:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:18.411 00:08:18.411 real 0m15.522s 00:08:18.411 user 0m33.059s 
00:08:18.411 sys 0m5.759s 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:18.411 ************************************ 00:08:18.411 END TEST nvmf_host_management 00:08:18.411 ************************************ 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.411 ************************************ 00:08:18.411 START TEST nvmf_lvol 00:08:18.411 ************************************ 00:08:18.411 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:18.671 * Looking for test storage... 
00:08:18.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.671 23:48:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.671 --rc genhtml_branch_coverage=1 00:08:18.671 --rc genhtml_function_coverage=1 00:08:18.671 --rc genhtml_legend=1 00:08:18.671 --rc geninfo_all_blocks=1 00:08:18.671 --rc geninfo_unexecuted_blocks=1 
00:08:18.671 00:08:18.671 ' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:18.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.671 --rc genhtml_branch_coverage=1 00:08:18.671 --rc genhtml_function_coverage=1 00:08:18.671 --rc genhtml_legend=1 00:08:18.671 --rc geninfo_all_blocks=1 00:08:18.671 --rc geninfo_unexecuted_blocks=1 00:08:18.671 00:08:18.671 ' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.671 --rc genhtml_branch_coverage=1 00:08:18.671 --rc genhtml_function_coverage=1 00:08:18.671 --rc genhtml_legend=1 00:08:18.671 --rc geninfo_all_blocks=1 00:08:18.671 --rc geninfo_unexecuted_blocks=1 00:08:18.671 00:08:18.671 ' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.671 --rc genhtml_branch_coverage=1 00:08:18.671 --rc genhtml_function_coverage=1 00:08:18.671 --rc genhtml_legend=1 00:08:18.671 --rc geninfo_all_blocks=1 00:08:18.671 --rc geninfo_unexecuted_blocks=1 00:08:18.671 00:08:18.671 ' 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.671 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.671 23:48:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.672 23:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:23.943 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:23.943 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.943 
23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:23.943 Found net devices under 0000:af:00.0: cvl_0_0 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.943 23:49:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:23.943 Found net devices under 0000:af:00.1: cvl_0_1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.943 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:23.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:08:23.943 00:08:23.943 --- 10.0.0.2 ping statistics --- 00:08:23.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.943 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:23.944 00:08:23.944 --- 10.0.0.1 ping statistics --- 00:08:23.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.944 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3845334 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3845334 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3845334 ']' 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.944 23:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:23.944 [2024-12-13 23:49:02.937812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:23.944 [2024-12-13 23:49:02.937902] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.944 [2024-12-13 23:49:03.055378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:24.204 [2024-12-13 23:49:03.162883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.204 [2024-12-13 23:49:03.162922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.204 [2024-12-13 23:49:03.162932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.204 [2024-12-13 23:49:03.162958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.204 [2024-12-13 23:49:03.162967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:24.204 [2024-12-13 23:49:03.165218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.204 [2024-12-13 23:49:03.165288] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.204 [2024-12-13 23:49:03.165293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.771 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.030 [2024-12-13 23:49:03.944848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.030 23:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.289 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:25.289 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:25.548 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:25.548 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:25.807 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:25.807 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ce18586c-c5e2-4735-967c-9fab497625c1 00:08:25.807 23:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ce18586c-c5e2-4735-967c-9fab497625c1 lvol 20 00:08:26.065 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=422d0047-6ca4-4d8a-bc6b-3c8520ca8f75 00:08:26.065 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.325 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 422d0047-6ca4-4d8a-bc6b-3c8520ca8f75 00:08:26.584 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.584 [2024-12-13 23:49:05.667393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.584 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.842 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3845823 00:08:26.842 23:49:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:26.842 23:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:27.779 23:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 422d0047-6ca4-4d8a-bc6b-3c8520ca8f75 MY_SNAPSHOT 00:08:28.038 23:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=77a0f5e6-b6dd-469c-8732-1dea1ae2b1cd 00:08:28.038 23:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 422d0047-6ca4-4d8a-bc6b-3c8520ca8f75 30 00:08:28.297 23:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 77a0f5e6-b6dd-469c-8732-1dea1ae2b1cd MY_CLONE 00:08:28.555 23:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c21aee8a-14d5-4dc4-b761-e9cd7385269f 00:08:28.555 23:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c21aee8a-14d5-4dc4-b761-e9cd7385269f 00:08:29.123 23:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3845823 00:08:39.104 Initializing NVMe Controllers 00:08:39.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:39.104 Controller IO queue size 128, less than required. 00:08:39.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:39.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:39.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:39.104 Initialization complete. Launching workers. 00:08:39.104 ======================================================== 00:08:39.104 Latency(us) 00:08:39.104 Device Information : IOPS MiB/s Average min max 00:08:39.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11152.30 43.56 11480.38 239.66 164673.00 00:08:39.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10892.40 42.55 11752.98 2718.87 183146.81 00:08:39.105 ======================================================== 00:08:39.105 Total : 22044.70 86.11 11615.07 239.66 183146.81 00:08:39.105 00:08:39.105 23:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.105 23:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 422d0047-6ca4-4d8a-bc6b-3c8520ca8f75 00:08:39.105 23:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce18586c-c5e2-4735-967c-9fab497625c1 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:39.105 23:49:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.105 rmmod nvme_tcp 00:08:39.105 rmmod nvme_fabrics 00:08:39.105 rmmod nvme_keyring 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3845334 ']' 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3845334 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3845334 ']' 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3845334 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3845334 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3845334' 00:08:39.105 killing process with pid 3845334 00:08:39.105 
23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3845334 00:08:39.105 23:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3845334 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.673 23:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.209 00:08:42.209 real 0m23.199s 00:08:42.209 user 1m8.442s 00:08:42.209 sys 0m6.998s 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 ************************************ 00:08:42.209 
END TEST nvmf_lvol 00:08:42.209 ************************************ 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.209 ************************************ 00:08:42.209 START TEST nvmf_lvs_grow 00:08:42.209 ************************************ 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:42.209 * Looking for test storage... 00:08:42.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.209 23:49:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.209 --rc genhtml_branch_coverage=1 00:08:42.209 --rc genhtml_function_coverage=1 00:08:42.209 --rc genhtml_legend=1 00:08:42.209 --rc geninfo_all_blocks=1 00:08:42.209 --rc geninfo_unexecuted_blocks=1 00:08:42.209 00:08:42.209 ' 
00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.209 --rc genhtml_branch_coverage=1 00:08:42.209 --rc genhtml_function_coverage=1 00:08:42.209 --rc genhtml_legend=1 00:08:42.209 --rc geninfo_all_blocks=1 00:08:42.209 --rc geninfo_unexecuted_blocks=1 00:08:42.209 00:08:42.209 ' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.209 --rc genhtml_branch_coverage=1 00:08:42.209 --rc genhtml_function_coverage=1 00:08:42.209 --rc genhtml_legend=1 00:08:42.209 --rc geninfo_all_blocks=1 00:08:42.209 --rc geninfo_unexecuted_blocks=1 00:08:42.209 00:08:42.209 ' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.209 --rc genhtml_branch_coverage=1 00:08:42.209 --rc genhtml_function_coverage=1 00:08:42.209 --rc genhtml_legend=1 00:08:42.209 --rc geninfo_all_blocks=1 00:08:42.209 --rc geninfo_unexecuted_blocks=1 00:08:42.209 00:08:42.209 ' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.209 23:49:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.209 23:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.209 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.209 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.209 
23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.209 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.210 23:49:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.210 
23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.210 23:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.482 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:47.483 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:47.483 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.483 
23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:47.483 Found net devices under 0000:af:00.0: cvl_0_0 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:47.483 Found net devices under 0000:af:00.1: cvl_0_1 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.483 23:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.483 23:49:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:08:47.483 00:08:47.483 --- 10.0.0.2 ping statistics --- 00:08:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.483 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:47.483 00:08:47.483 --- 10.0.0.1 ping statistics --- 00:08:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.483 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3851311 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3851311 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3851311 ']' 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.483 23:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:47.483 [2024-12-13 23:49:26.379456] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:47.483 [2024-12-13 23:49:26.379548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.483 [2024-12-13 23:49:26.497526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.483 [2024-12-13 23:49:26.604701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.483 [2024-12-13 23:49:26.604744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.483 [2024-12-13 23:49:26.604757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.483 [2024-12-13 23:49:26.604767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.483 [2024-12-13 23:49:26.604775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.483 [2024-12-13 23:49:26.606217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.051 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.051 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:48.051 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.051 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.051 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.311 [2024-12-13 23:49:27.388850] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.311 ************************************ 00:08:48.311 START TEST lvs_grow_clean 00:08:48.311 ************************************ 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.311 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.570 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.570 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.570 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:48.829 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e673dfef-d40c-413d-9b86-192f6d0b5e03 00:08:48.829 23:49:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:08:48.829 23:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e673dfef-d40c-413d-9b86-192f6d0b5e03 lvol 150 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4b95b9c5-8dd2-4514-b4eb-b87cae670c9c 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.088 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.347 [2024-12-13 23:49:28.392503] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.347 [2024-12-13 23:49:28.392593] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.347 true 00:08:49.347 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.347 23:49:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:08:49.606 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.606 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.865 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b95b9c5-8dd2-4514-b4eb-b87cae670c9c 00:08:49.865 23:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:50.124 [2024-12-13 23:49:29.146937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.124 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3851811 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3851811 /var/tmp/bdevperf.sock 00:08:50.383 23:49:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3851811 ']' 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:50.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.383 23:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:50.383 [2024-12-13 23:49:29.409579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:50.383 [2024-12-13 23:49:29.409668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851811 ] 00:08:50.383 [2024-12-13 23:49:29.521539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.642 [2024-12-13 23:49:29.632434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.211 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.211 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:51.211 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:51.779 Nvme0n1 00:08:51.779 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:51.779 [ 00:08:51.779 { 00:08:51.779 "name": "Nvme0n1", 00:08:51.779 "aliases": [ 00:08:51.779 "4b95b9c5-8dd2-4514-b4eb-b87cae670c9c" 00:08:51.779 ], 00:08:51.779 "product_name": "NVMe disk", 00:08:51.779 "block_size": 4096, 00:08:51.779 "num_blocks": 38912, 00:08:51.779 "uuid": "4b95b9c5-8dd2-4514-b4eb-b87cae670c9c", 00:08:51.779 "numa_id": 1, 00:08:51.779 "assigned_rate_limits": { 00:08:51.779 "rw_ios_per_sec": 0, 00:08:51.779 "rw_mbytes_per_sec": 0, 00:08:51.779 "r_mbytes_per_sec": 0, 00:08:51.779 "w_mbytes_per_sec": 0 00:08:51.779 }, 00:08:51.779 "claimed": false, 00:08:51.779 "zoned": false, 00:08:51.779 "supported_io_types": { 00:08:51.779 "read": true, 
00:08:51.779 "write": true, 00:08:51.779 "unmap": true, 00:08:51.779 "flush": true, 00:08:51.779 "reset": true, 00:08:51.779 "nvme_admin": true, 00:08:51.779 "nvme_io": true, 00:08:51.779 "nvme_io_md": false, 00:08:51.779 "write_zeroes": true, 00:08:51.779 "zcopy": false, 00:08:51.779 "get_zone_info": false, 00:08:51.779 "zone_management": false, 00:08:51.779 "zone_append": false, 00:08:51.779 "compare": true, 00:08:51.779 "compare_and_write": true, 00:08:51.779 "abort": true, 00:08:51.779 "seek_hole": false, 00:08:51.779 "seek_data": false, 00:08:51.779 "copy": true, 00:08:51.779 "nvme_iov_md": false 00:08:51.779 }, 00:08:51.779 "memory_domains": [ 00:08:51.779 { 00:08:51.779 "dma_device_id": "system", 00:08:51.779 "dma_device_type": 1 00:08:51.779 } 00:08:51.779 ], 00:08:51.779 "driver_specific": { 00:08:51.779 "nvme": [ 00:08:51.779 { 00:08:51.779 "trid": { 00:08:51.779 "trtype": "TCP", 00:08:51.779 "adrfam": "IPv4", 00:08:51.779 "traddr": "10.0.0.2", 00:08:51.779 "trsvcid": "4420", 00:08:51.779 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:51.779 }, 00:08:51.779 "ctrlr_data": { 00:08:51.779 "cntlid": 1, 00:08:51.779 "vendor_id": "0x8086", 00:08:51.779 "model_number": "SPDK bdev Controller", 00:08:51.779 "serial_number": "SPDK0", 00:08:51.779 "firmware_revision": "25.01", 00:08:51.779 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.779 "oacs": { 00:08:51.779 "security": 0, 00:08:51.779 "format": 0, 00:08:51.779 "firmware": 0, 00:08:51.779 "ns_manage": 0 00:08:51.779 }, 00:08:51.779 "multi_ctrlr": true, 00:08:51.779 "ana_reporting": false 00:08:51.779 }, 00:08:51.779 "vs": { 00:08:51.779 "nvme_version": "1.3" 00:08:51.779 }, 00:08:51.779 "ns_data": { 00:08:51.779 "id": 1, 00:08:51.779 "can_share": true 00:08:51.779 } 00:08:51.779 } 00:08:51.779 ], 00:08:51.779 "mp_policy": "active_passive" 00:08:51.779 } 00:08:51.779 } 00:08:51.779 ] 00:08:51.779 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3852043 00:08:51.779 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.779 23:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.038 Running I/O for 10 seconds... 00:08:52.975 Latency(us) 00:08:52.975 [2024-12-13T22:49:32.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.975 Nvme0n1 : 1.00 20321.00 79.38 0.00 0.00 0.00 0.00 0.00 00:08:52.975 [2024-12-13T22:49:32.116Z] =================================================================================================================== 00:08:52.975 [2024-12-13T22:49:32.116Z] Total : 20321.00 79.38 0.00 0.00 0.00 0.00 0.00 00:08:52.975 00:08:53.912 23:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:08:53.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.913 Nvme0n1 : 2.00 20480.00 80.00 0.00 0.00 0.00 0.00 0.00 00:08:53.913 [2024-12-13T22:49:33.054Z] =================================================================================================================== 00:08:53.913 [2024-12-13T22:49:33.054Z] Total : 20480.00 80.00 0.00 0.00 0.00 0.00 0.00 00:08:53.913 00:08:53.913 true 00:08:54.171 23:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:08:54.171 23:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:54.172 23:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:54.172 23:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:54.172 23:49:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3852043 00:08:55.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.108 Nvme0n1 : 3.00 20513.33 80.13 0.00 0.00 0.00 0.00 0.00 00:08:55.108 [2024-12-13T22:49:34.249Z] =================================================================================================================== 00:08:55.108 [2024-12-13T22:49:34.249Z] Total : 20513.33 80.13 0.00 0.00 0.00 0.00 0.00 00:08:55.108 00:08:56.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.044 Nvme0n1 : 4.00 20580.75 80.39 0.00 0.00 0.00 0.00 0.00 00:08:56.044 [2024-12-13T22:49:35.185Z] =================================================================================================================== 00:08:56.044 [2024-12-13T22:49:35.185Z] Total : 20580.75 80.39 0.00 0.00 0.00 0.00 0.00 00:08:56.044 00:08:56.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.981 Nvme0n1 : 5.00 20619.80 80.55 0.00 0.00 0.00 0.00 0.00 00:08:56.981 [2024-12-13T22:49:36.122Z] =================================================================================================================== 00:08:56.981 [2024-12-13T22:49:36.122Z] Total : 20619.80 80.55 0.00 0.00 0.00 0.00 0.00 00:08:56.981 00:08:58.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.079 Nvme0n1 : 6.00 20634.50 80.60 0.00 0.00 0.00 0.00 0.00 00:08:58.079 [2024-12-13T22:49:37.220Z] =================================================================================================================== 00:08:58.079 
[2024-12-13T22:49:37.220Z] Total : 20634.50 80.60 0.00 0.00 0.00 0.00 0.00 00:08:58.079 00:08:59.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.017 Nvme0n1 : 7.00 20662.57 80.71 0.00 0.00 0.00 0.00 0.00 00:08:59.017 [2024-12-13T22:49:38.158Z] =================================================================================================================== 00:08:59.017 [2024-12-13T22:49:38.158Z] Total : 20662.57 80.71 0.00 0.00 0.00 0.00 0.00 00:08:59.017 00:08:59.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.954 Nvme0n1 : 8.00 20685.12 80.80 0.00 0.00 0.00 0.00 0.00 00:08:59.954 [2024-12-13T22:49:39.095Z] =================================================================================================================== 00:08:59.954 [2024-12-13T22:49:39.095Z] Total : 20685.12 80.80 0.00 0.00 0.00 0.00 0.00 00:08:59.954 00:09:00.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.891 Nvme0n1 : 9.00 20697.33 80.85 0.00 0.00 0.00 0.00 0.00 00:09:00.891 [2024-12-13T22:49:40.032Z] =================================================================================================================== 00:09:00.891 [2024-12-13T22:49:40.032Z] Total : 20697.33 80.85 0.00 0.00 0.00 0.00 0.00 00:09:00.891 00:09:01.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.828 Nvme0n1 : 10.00 20678.80 80.78 0.00 0.00 0.00 0.00 0.00 00:09:01.828 [2024-12-13T22:49:40.969Z] =================================================================================================================== 00:09:01.828 [2024-12-13T22:49:40.969Z] Total : 20678.80 80.78 0.00 0.00 0.00 0.00 0.00 00:09:01.828 00:09:02.087 00:09:02.087 Latency(us) 00:09:02.087 [2024-12-13T22:49:41.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:02.087 Nvme0n1 : 10.01 20680.87 80.78 0.00 0.00 6186.29 3651.29 12483.05 00:09:02.087 [2024-12-13T22:49:41.228Z] =================================================================================================================== 00:09:02.087 [2024-12-13T22:49:41.228Z] Total : 20680.87 80.78 0.00 0.00 6186.29 3651.29 12483.05 00:09:02.087 { 00:09:02.087 "results": [ 00:09:02.087 { 00:09:02.087 "job": "Nvme0n1", 00:09:02.087 "core_mask": "0x2", 00:09:02.087 "workload": "randwrite", 00:09:02.087 "status": "finished", 00:09:02.087 "queue_depth": 128, 00:09:02.087 "io_size": 4096, 00:09:02.087 "runtime": 10.005186, 00:09:02.087 "iops": 20680.874898277754, 00:09:02.087 "mibps": 80.78466757139748, 00:09:02.087 "io_failed": 0, 00:09:02.087 "io_timeout": 0, 00:09:02.087 "avg_latency_us": 6186.29195251075, 00:09:02.087 "min_latency_us": 3651.2914285714287, 00:09:02.087 "max_latency_us": 12483.047619047618 00:09:02.087 } 00:09:02.087 ], 00:09:02.087 "core_count": 1 00:09:02.087 } 00:09:02.087 23:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3851811 00:09:02.087 23:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3851811 ']' 00:09:02.088 23:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3851811 00:09:02.088 23:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:02.088 23:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.088 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3851811 00:09:02.088 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:02.088 23:49:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:02.088 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3851811' 00:09:02.088 killing process with pid 3851811 00:09:02.088 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3851811 00:09:02.088 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.088 00:09:02.088 Latency(us) 00:09:02.088 [2024-12-13T22:49:41.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.088 [2024-12-13T22:49:41.229Z] =================================================================================================================== 00:09:02.088 [2024-12-13T22:49:41.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.088 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3851811 00:09:03.026 23:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.026 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.284 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:03.284 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:03.544 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:03.544 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:03.544 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.544 [2024-12-13 23:49:42.661327] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.803 
23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:03.803 request: 00:09:03.803 { 00:09:03.803 "uuid": "e673dfef-d40c-413d-9b86-192f6d0b5e03", 00:09:03.803 "method": "bdev_lvol_get_lvstores", 00:09:03.803 "req_id": 1 00:09:03.803 } 00:09:03.803 Got JSON-RPC error response 00:09:03.803 response: 00:09:03.803 { 00:09:03.803 "code": -19, 00:09:03.803 "message": "No such device" 00:09:03.803 } 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.803 23:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.062 aio_bdev 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4b95b9c5-8dd2-4514-b4eb-b87cae670c9c 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4b95b9c5-8dd2-4514-b4eb-b87cae670c9c 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.062 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.321 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4b95b9c5-8dd2-4514-b4eb-b87cae670c9c -t 2000 00:09:04.321 [ 00:09:04.321 { 00:09:04.321 "name": "4b95b9c5-8dd2-4514-b4eb-b87cae670c9c", 00:09:04.321 "aliases": [ 00:09:04.321 "lvs/lvol" 00:09:04.321 ], 00:09:04.321 "product_name": "Logical Volume", 00:09:04.321 "block_size": 4096, 00:09:04.321 "num_blocks": 38912, 00:09:04.321 "uuid": "4b95b9c5-8dd2-4514-b4eb-b87cae670c9c", 00:09:04.321 "assigned_rate_limits": { 00:09:04.321 "rw_ios_per_sec": 0, 00:09:04.321 "rw_mbytes_per_sec": 0, 00:09:04.321 "r_mbytes_per_sec": 0, 00:09:04.321 "w_mbytes_per_sec": 0 00:09:04.321 }, 00:09:04.321 "claimed": false, 00:09:04.321 "zoned": false, 00:09:04.321 "supported_io_types": { 00:09:04.321 "read": true, 00:09:04.321 "write": true, 00:09:04.321 "unmap": true, 00:09:04.321 "flush": false, 00:09:04.321 "reset": true, 00:09:04.321 
"nvme_admin": false, 00:09:04.321 "nvme_io": false, 00:09:04.321 "nvme_io_md": false, 00:09:04.321 "write_zeroes": true, 00:09:04.321 "zcopy": false, 00:09:04.322 "get_zone_info": false, 00:09:04.322 "zone_management": false, 00:09:04.322 "zone_append": false, 00:09:04.322 "compare": false, 00:09:04.322 "compare_and_write": false, 00:09:04.322 "abort": false, 00:09:04.322 "seek_hole": true, 00:09:04.322 "seek_data": true, 00:09:04.322 "copy": false, 00:09:04.322 "nvme_iov_md": false 00:09:04.322 }, 00:09:04.322 "driver_specific": { 00:09:04.322 "lvol": { 00:09:04.322 "lvol_store_uuid": "e673dfef-d40c-413d-9b86-192f6d0b5e03", 00:09:04.322 "base_bdev": "aio_bdev", 00:09:04.322 "thin_provision": false, 00:09:04.322 "num_allocated_clusters": 38, 00:09:04.322 "snapshot": false, 00:09:04.322 "clone": false, 00:09:04.322 "esnap_clone": false 00:09:04.322 } 00:09:04.322 } 00:09:04.322 } 00:09:04.322 ] 00:09:04.322 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:04.322 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:04.322 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.581 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.581 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:04.581 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.840 23:49:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.840 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4b95b9c5-8dd2-4514-b4eb-b87cae670c9c 00:09:04.840 23:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e673dfef-d40c-413d-9b86-192f6d0b5e03 00:09:05.099 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.358 00:09:05.358 real 0m16.934s 00:09:05.358 user 0m16.537s 00:09:05.358 sys 0m1.562s 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 ************************************ 00:09:05.358 END TEST lvs_grow_clean 00:09:05.358 ************************************ 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 ************************************ 
00:09:05.358 START TEST lvs_grow_dirty 00:09:05.358 ************************************ 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.358 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.617 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:05.617 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:05.876 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:05.876 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:05.876 23:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 lvol 150 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.136 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:06.395 [2024-12-13 23:49:45.405725] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:06.395 [2024-12-13 23:49:45.405810] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:06.395 true 00:09:06.395 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:06.395 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:06.654 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:06.654 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:06.654 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:06.912 23:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:07.171 [2024-12-13 23:49:46.164096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.171 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3854715 00:09:07.430 23:49:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3854715 /var/tmp/bdevperf.sock 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3854715 ']' 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:07.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.430 23:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.430 [2024-12-13 23:49:46.421554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:07.430 [2024-12-13 23:49:46.421645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854715 ] 00:09:07.430 [2024-12-13 23:49:46.534127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.689 [2024-12-13 23:49:46.645787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.257 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.257 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:08.257 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:08.516 Nvme0n1 00:09:08.516 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:08.775 [ 00:09:08.775 { 00:09:08.775 "name": "Nvme0n1", 00:09:08.775 "aliases": [ 00:09:08.775 "674e72ca-7e33-41df-bdb1-3b381685edb9" 00:09:08.775 ], 00:09:08.775 "product_name": "NVMe disk", 00:09:08.775 "block_size": 4096, 00:09:08.775 "num_blocks": 38912, 00:09:08.775 "uuid": "674e72ca-7e33-41df-bdb1-3b381685edb9", 00:09:08.775 "numa_id": 1, 00:09:08.775 "assigned_rate_limits": { 00:09:08.775 "rw_ios_per_sec": 0, 00:09:08.775 "rw_mbytes_per_sec": 0, 00:09:08.775 "r_mbytes_per_sec": 0, 00:09:08.775 "w_mbytes_per_sec": 0 00:09:08.775 }, 00:09:08.775 "claimed": false, 00:09:08.775 "zoned": false, 00:09:08.775 "supported_io_types": { 00:09:08.775 "read": true, 
00:09:08.775 "write": true, 00:09:08.775 "unmap": true, 00:09:08.775 "flush": true, 00:09:08.775 "reset": true, 00:09:08.775 "nvme_admin": true, 00:09:08.775 "nvme_io": true, 00:09:08.775 "nvme_io_md": false, 00:09:08.775 "write_zeroes": true, 00:09:08.775 "zcopy": false, 00:09:08.775 "get_zone_info": false, 00:09:08.775 "zone_management": false, 00:09:08.775 "zone_append": false, 00:09:08.775 "compare": true, 00:09:08.775 "compare_and_write": true, 00:09:08.775 "abort": true, 00:09:08.775 "seek_hole": false, 00:09:08.775 "seek_data": false, 00:09:08.775 "copy": true, 00:09:08.775 "nvme_iov_md": false 00:09:08.775 }, 00:09:08.775 "memory_domains": [ 00:09:08.775 { 00:09:08.775 "dma_device_id": "system", 00:09:08.775 "dma_device_type": 1 00:09:08.775 } 00:09:08.775 ], 00:09:08.775 "driver_specific": { 00:09:08.775 "nvme": [ 00:09:08.775 { 00:09:08.775 "trid": { 00:09:08.775 "trtype": "TCP", 00:09:08.775 "adrfam": "IPv4", 00:09:08.775 "traddr": "10.0.0.2", 00:09:08.775 "trsvcid": "4420", 00:09:08.775 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:08.775 }, 00:09:08.775 "ctrlr_data": { 00:09:08.775 "cntlid": 1, 00:09:08.775 "vendor_id": "0x8086", 00:09:08.775 "model_number": "SPDK bdev Controller", 00:09:08.775 "serial_number": "SPDK0", 00:09:08.775 "firmware_revision": "25.01", 00:09:08.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:08.775 "oacs": { 00:09:08.775 "security": 0, 00:09:08.775 "format": 0, 00:09:08.775 "firmware": 0, 00:09:08.775 "ns_manage": 0 00:09:08.775 }, 00:09:08.775 "multi_ctrlr": true, 00:09:08.775 "ana_reporting": false 00:09:08.775 }, 00:09:08.775 "vs": { 00:09:08.775 "nvme_version": "1.3" 00:09:08.775 }, 00:09:08.775 "ns_data": { 00:09:08.775 "id": 1, 00:09:08.775 "can_share": true 00:09:08.775 } 00:09:08.775 } 00:09:08.775 ], 00:09:08.775 "mp_policy": "active_passive" 00:09:08.775 } 00:09:08.775 } 00:09:08.775 ] 00:09:08.775 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3855013 00:09:08.775 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:08.775 23:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:09.034 Running I/O for 10 seconds... 00:09:09.971 Latency(us) 00:09:09.971 [2024-12-13T22:49:49.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:09.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.971 Nvme0n1 : 1.00 20386.00 79.63 0.00 0.00 0.00 0.00 0.00 00:09:09.971 [2024-12-13T22:49:49.112Z] =================================================================================================================== 00:09:09.971 [2024-12-13T22:49:49.112Z] Total : 20386.00 79.63 0.00 0.00 0.00 0.00 0.00 00:09:09.971 00:09:10.907 23:49:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:10.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.907 Nvme0n1 : 2.00 20459.00 79.92 0.00 0.00 0.00 0.00 0.00 00:09:10.907 [2024-12-13T22:49:50.048Z] =================================================================================================================== 00:09:10.907 [2024-12-13T22:49:50.048Z] Total : 20459.00 79.92 0.00 0.00 0.00 0.00 0.00 00:09:10.907 00:09:10.907 true 00:09:11.166 23:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:11.166 23:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:11.166 23:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:11.166 23:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:11.166 23:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3855013 00:09:12.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.103 Nvme0n1 : 3.00 20372.33 79.58 0.00 0.00 0.00 0.00 0.00 00:09:12.103 [2024-12-13T22:49:51.244Z] =================================================================================================================== 00:09:12.103 [2024-12-13T22:49:51.244Z] Total : 20372.33 79.58 0.00 0.00 0.00 0.00 0.00 00:09:12.103 00:09:13.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.039 Nvme0n1 : 4.00 20464.50 79.94 0.00 0.00 0.00 0.00 0.00 00:09:13.039 [2024-12-13T22:49:52.180Z] =================================================================================================================== 00:09:13.039 [2024-12-13T22:49:52.180Z] Total : 20464.50 79.94 0.00 0.00 0.00 0.00 0.00 00:09:13.039 00:09:13.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.976 Nvme0n1 : 5.00 20540.00 80.23 0.00 0.00 0.00 0.00 0.00 00:09:13.976 [2024-12-13T22:49:53.117Z] =================================================================================================================== 00:09:13.976 [2024-12-13T22:49:53.117Z] Total : 20540.00 80.23 0.00 0.00 0.00 0.00 0.00 00:09:13.976 00:09:14.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.910 Nvme0n1 : 6.00 20577.67 80.38 0.00 0.00 0.00 0.00 0.00 00:09:14.910 [2024-12-13T22:49:54.051Z] =================================================================================================================== 
00:09:14.910 [2024-12-13T22:49:54.051Z] Total : 20577.67 80.38 0.00 0.00 0.00 0.00 0.00 00:09:14.910 00:09:15.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.847 Nvme0n1 : 7.00 20604.43 80.49 0.00 0.00 0.00 0.00 0.00 00:09:15.847 [2024-12-13T22:49:54.988Z] =================================================================================================================== 00:09:15.847 [2024-12-13T22:49:54.988Z] Total : 20604.43 80.49 0.00 0.00 0.00 0.00 0.00 00:09:15.847 00:09:17.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.227 Nvme0n1 : 8.00 20628.62 80.58 0.00 0.00 0.00 0.00 0.00 00:09:17.227 [2024-12-13T22:49:56.368Z] =================================================================================================================== 00:09:17.227 [2024-12-13T22:49:56.368Z] Total : 20628.62 80.58 0.00 0.00 0.00 0.00 0.00 00:09:17.227 00:09:18.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.164 Nvme0n1 : 9.00 20657.89 80.69 0.00 0.00 0.00 0.00 0.00 00:09:18.164 [2024-12-13T22:49:57.305Z] =================================================================================================================== 00:09:18.164 [2024-12-13T22:49:57.305Z] Total : 20657.89 80.69 0.00 0.00 0.00 0.00 0.00 00:09:18.164 00:09:19.103 00:09:19.103 Latency(us) 00:09:19.103 [2024-12-13T22:49:58.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.103 Nvme0n1 : 10.00 20674.46 80.76 0.00 0.00 6188.08 3682.50 12982.37 00:09:19.103 [2024-12-13T22:49:58.244Z] =================================================================================================================== 00:09:19.103 [2024-12-13T22:49:58.244Z] Total : 20674.46 80.76 0.00 0.00 6188.08 3682.50 12982.37 00:09:19.103 { 00:09:19.103 "results": [ 00:09:19.103 { 00:09:19.103 "job": 
"Nvme0n1", 00:09:19.103 "core_mask": "0x2", 00:09:19.103 "workload": "randwrite", 00:09:19.103 "status": "finished", 00:09:19.103 "queue_depth": 128, 00:09:19.103 "io_size": 4096, 00:09:19.103 "runtime": 10.001954, 00:09:19.103 "iops": 20674.460210474874, 00:09:19.103 "mibps": 80.75961019716748, 00:09:19.103 "io_failed": 0, 00:09:19.103 "io_timeout": 0, 00:09:19.103 "avg_latency_us": 6188.076922923165, 00:09:19.103 "min_latency_us": 3682.499047619048, 00:09:19.103 "max_latency_us": 12982.369523809524 00:09:19.103 } 00:09:19.103 ], 00:09:19.103 "core_count": 1 00:09:19.103 } 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3854715 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3854715 ']' 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3854715 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.103 23:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854715 00:09:19.103 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:19.103 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.103 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854715' 00:09:19.103 killing process with pid 3854715 00:09:19.103 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3854715 
00:09:19.103 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.103 00:09:19.103 Latency(us) 00:09:19.103 [2024-12-13T22:49:58.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.103 [2024-12-13T22:49:58.244Z] =================================================================================================================== 00:09:19.103 [2024-12-13T22:49:58.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.103 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3854715 00:09:20.040 23:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.040 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:20.299 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:20.299 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3851311 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3851311 00:09:20.558 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3851311 Killed "${NVMF_APP[@]}" "$@" 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3856831 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3856831 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3856831 ']' 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.558 23:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.558 [2024-12-13 23:49:59.607321] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:20.558 [2024-12-13 23:49:59.607410] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.817 [2024-12-13 23:49:59.727899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.817 [2024-12-13 23:49:59.832580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.817 [2024-12-13 23:49:59.832637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.817 [2024-12-13 23:49:59.832647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.817 [2024-12-13 23:49:59.832674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.817 [2024-12-13 23:49:59.832684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.817 [2024-12-13 23:49:59.834102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.385 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:21.644 [2024-12-13 23:50:00.615516] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:21.644 [2024-12-13 23:50:00.615671] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:21.644 [2024-12-13 23:50:00.615708] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=674e72ca-7e33-41df-bdb1-3b381685edb9 
00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.644 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:21.903 23:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 674e72ca-7e33-41df-bdb1-3b381685edb9 -t 2000 00:09:21.903 [ 00:09:21.903 { 00:09:21.903 "name": "674e72ca-7e33-41df-bdb1-3b381685edb9", 00:09:21.903 "aliases": [ 00:09:21.903 "lvs/lvol" 00:09:21.903 ], 00:09:21.903 "product_name": "Logical Volume", 00:09:21.903 "block_size": 4096, 00:09:21.903 "num_blocks": 38912, 00:09:21.903 "uuid": "674e72ca-7e33-41df-bdb1-3b381685edb9", 00:09:21.903 "assigned_rate_limits": { 00:09:21.903 "rw_ios_per_sec": 0, 00:09:21.903 "rw_mbytes_per_sec": 0, 00:09:21.903 "r_mbytes_per_sec": 0, 00:09:21.903 "w_mbytes_per_sec": 0 00:09:21.903 }, 00:09:21.903 "claimed": false, 00:09:21.903 "zoned": false, 00:09:21.903 "supported_io_types": { 00:09:21.903 "read": true, 00:09:21.903 "write": true, 00:09:21.903 "unmap": true, 00:09:21.903 "flush": false, 00:09:21.903 "reset": true, 00:09:21.903 "nvme_admin": false, 00:09:21.903 "nvme_io": false, 00:09:21.903 "nvme_io_md": false, 00:09:21.903 "write_zeroes": true, 00:09:21.903 "zcopy": false, 00:09:21.903 "get_zone_info": false, 00:09:21.903 "zone_management": false, 00:09:21.903 "zone_append": 
false, 00:09:21.903 "compare": false, 00:09:21.903 "compare_and_write": false, 00:09:21.903 "abort": false, 00:09:21.903 "seek_hole": true, 00:09:21.903 "seek_data": true, 00:09:21.903 "copy": false, 00:09:21.903 "nvme_iov_md": false 00:09:21.903 }, 00:09:21.903 "driver_specific": { 00:09:21.903 "lvol": { 00:09:21.903 "lvol_store_uuid": "9729e59e-f4f4-417d-abb6-7d2c681e1a74", 00:09:21.903 "base_bdev": "aio_bdev", 00:09:21.903 "thin_provision": false, 00:09:21.903 "num_allocated_clusters": 38, 00:09:21.903 "snapshot": false, 00:09:21.903 "clone": false, 00:09:21.903 "esnap_clone": false 00:09:21.903 } 00:09:21.903 } 00:09:21.903 } 00:09:21.903 ] 00:09:21.903 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:21.903 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:21.903 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:22.161 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:22.161 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:22.161 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:22.420 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:22.420 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:22.420 [2024-12-13 23:50:01.543981] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.679 23:50:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:22.679 request: 00:09:22.679 { 00:09:22.679 "uuid": "9729e59e-f4f4-417d-abb6-7d2c681e1a74", 00:09:22.679 "method": "bdev_lvol_get_lvstores", 00:09:22.679 "req_id": 1 00:09:22.679 } 00:09:22.679 Got JSON-RPC error response 00:09:22.679 response: 00:09:22.679 { 00:09:22.679 "code": -19, 00:09:22.679 "message": "No such device" 00:09:22.679 } 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:22.679 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.938 aio_bdev 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.938 23:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:23.198 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 674e72ca-7e33-41df-bdb1-3b381685edb9 -t 2000 00:09:23.198 [ 00:09:23.198 { 00:09:23.198 "name": "674e72ca-7e33-41df-bdb1-3b381685edb9", 00:09:23.198 "aliases": [ 00:09:23.198 "lvs/lvol" 00:09:23.198 ], 00:09:23.198 "product_name": "Logical Volume", 00:09:23.198 "block_size": 4096, 00:09:23.198 "num_blocks": 38912, 00:09:23.198 "uuid": "674e72ca-7e33-41df-bdb1-3b381685edb9", 00:09:23.198 "assigned_rate_limits": { 00:09:23.198 "rw_ios_per_sec": 0, 00:09:23.198 "rw_mbytes_per_sec": 0, 00:09:23.198 "r_mbytes_per_sec": 0, 00:09:23.198 "w_mbytes_per_sec": 0 00:09:23.198 }, 00:09:23.198 "claimed": false, 00:09:23.198 "zoned": false, 00:09:23.198 "supported_io_types": { 00:09:23.198 "read": true, 00:09:23.198 "write": true, 00:09:23.198 "unmap": true, 00:09:23.198 "flush": false, 00:09:23.198 "reset": true, 00:09:23.198 "nvme_admin": false, 00:09:23.198 "nvme_io": false, 00:09:23.198 "nvme_io_md": false, 00:09:23.198 "write_zeroes": true, 00:09:23.198 "zcopy": false, 00:09:23.198 "get_zone_info": false, 00:09:23.198 "zone_management": false, 00:09:23.198 "zone_append": false, 00:09:23.198 "compare": false, 00:09:23.198 "compare_and_write": false, 
00:09:23.198 "abort": false, 00:09:23.198 "seek_hole": true, 00:09:23.198 "seek_data": true, 00:09:23.198 "copy": false, 00:09:23.198 "nvme_iov_md": false 00:09:23.198 }, 00:09:23.198 "driver_specific": { 00:09:23.198 "lvol": { 00:09:23.198 "lvol_store_uuid": "9729e59e-f4f4-417d-abb6-7d2c681e1a74", 00:09:23.198 "base_bdev": "aio_bdev", 00:09:23.198 "thin_provision": false, 00:09:23.198 "num_allocated_clusters": 38, 00:09:23.198 "snapshot": false, 00:09:23.198 "clone": false, 00:09:23.198 "esnap_clone": false 00:09:23.198 } 00:09:23.198 } 00:09:23.198 } 00:09:23.198 ] 00:09:23.198 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:23.198 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:23.198 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:23.457 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:23.457 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:23.457 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:23.716 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:23.716 23:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 674e72ca-7e33-41df-bdb1-3b381685edb9 00:09:23.974 23:50:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9729e59e-f4f4-417d-abb6-7d2c681e1a74 00:09:23.974 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:24.233 00:09:24.233 real 0m18.839s 00:09:24.233 user 0m48.578s 00:09:24.233 sys 0m3.874s 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.233 ************************************ 00:09:24.233 END TEST lvs_grow_dirty 00:09:24.233 ************************************ 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:24.233 nvmf_trace.0 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.233 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.492 rmmod nvme_tcp 00:09:24.492 rmmod nvme_fabrics 00:09:24.492 rmmod nvme_keyring 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3856831 ']' 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3856831 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3856831 ']' 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3856831 
00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3856831 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3856831' 00:09:24.492 killing process with pid 3856831 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3856831 00:09:24.492 23:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3856831 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.429 23:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.974 00:09:27.974 real 0m45.793s 00:09:27.974 user 1m11.898s 00:09:27.974 sys 0m9.920s 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 ************************************ 00:09:27.974 END TEST nvmf_lvs_grow 00:09:27.974 ************************************ 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.974 ************************************ 00:09:27.974 START TEST nvmf_bdev_io_wait 00:09:27.974 ************************************ 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:27.974 * Looking for test storage... 
00:09:27.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.974 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.974 --rc genhtml_branch_coverage=1 00:09:27.974 --rc genhtml_function_coverage=1 00:09:27.974 --rc genhtml_legend=1 00:09:27.974 --rc geninfo_all_blocks=1 00:09:27.974 --rc geninfo_unexecuted_blocks=1 00:09:27.974 00:09:27.974 ' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.974 --rc genhtml_branch_coverage=1 00:09:27.974 --rc genhtml_function_coverage=1 00:09:27.974 --rc genhtml_legend=1 00:09:27.974 --rc geninfo_all_blocks=1 00:09:27.974 --rc geninfo_unexecuted_blocks=1 00:09:27.974 00:09:27.974 ' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.974 --rc genhtml_branch_coverage=1 00:09:27.974 --rc genhtml_function_coverage=1 00:09:27.974 --rc genhtml_legend=1 00:09:27.974 --rc geninfo_all_blocks=1 00:09:27.974 --rc geninfo_unexecuted_blocks=1 00:09:27.974 00:09:27.974 ' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.974 --rc genhtml_branch_coverage=1 00:09:27.974 --rc genhtml_function_coverage=1 00:09:27.974 --rc genhtml_legend=1 00:09:27.974 --rc geninfo_all_blocks=1 00:09:27.974 --rc geninfo_unexecuted_blocks=1 00:09:27.974 00:09:27.974 ' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.974 23:50:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.974 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:27.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:27.975 23:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.250 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.251 23:50:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:33.251 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:33.251 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.251 23:50:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.251 23:50:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:33.251 Found net devices under 0000:af:00.0: cvl_0_0 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.251 
23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:33.251 Found net devices under 0000:af:00.1: cvl_0_1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.251 23:50:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:09:33.251 00:09:33.251 --- 10.0.0.2 ping statistics --- 00:09:33.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.251 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:09:33.251 00:09:33.251 --- 10.0.0.1 ping statistics --- 00:09:33.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.251 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3861118 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3861118 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3861118 ']' 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.251 23:50:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.511 [2024-12-13 23:50:12.392748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:33.511 [2024-12-13 23:50:12.392838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.511 [2024-12-13 23:50:12.516076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.511 [2024-12-13 23:50:12.616697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.511 [2024-12-13 23:50:12.616749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:33.511 [2024-12-13 23:50:12.616760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.511 [2024-12-13 23:50:12.616786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.511 [2024-12-13 23:50:12.616795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.511 [2024-12-13 23:50:12.619150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.511 [2024-12-13 23:50:12.619222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.511 [2024-12-13 23:50:12.619326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.511 [2024-12-13 23:50:12.619336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.078 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.078 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:34.078 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:34.078 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:34.078 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.337 23:50:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.337 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 [2024-12-13 23:50:13.486318] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 Malloc0 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.596 
23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.596 [2024-12-13 23:50:13.597670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3861319 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3861322 
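The target-side setup traced above reduces to four RPC calls: create a 64 MiB malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420 (the transport itself was created just before with `nvmf_create_transport -t tcp -o -u 8192`). A dry-run sketch of that sequence; the `rpc.py` path and the 10.0.0.2 address are environment-specific assumptions taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the rpc.py sequence used by bdev_io_wait.sh in this run.
# RPC_PY and the listener address are assumptions matching this log's environment.
RPC_PY="${RPC_PY:-./scripts/rpc.py}"
cmds=(
  "$RPC_PY bdev_malloc_create 64 512 -b Malloc0"
  "$RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
# Print rather than execute, so the sequence is visible without a running target.
printf '%s\n' "${cmds[@]}"
```

In the actual test these run via `rpc_cmd` against the target started with `--wait-for-rpc`, which is why `framework_start_init` appears in the trace before the transport is created.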
00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.596 { 00:09:34.596 "params": { 00:09:34.596 "name": "Nvme$subsystem", 00:09:34.596 "trtype": "$TEST_TRANSPORT", 00:09:34.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.596 "adrfam": "ipv4", 00:09:34.596 "trsvcid": "$NVMF_PORT", 00:09:34.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.596 "hdgst": ${hdgst:-false}, 00:09:34.596 "ddgst": ${ddgst:-false} 00:09:34.596 }, 00:09:34.596 "method": "bdev_nvme_attach_controller" 00:09:34.596 } 00:09:34.596 EOF 00:09:34.596 )") 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3861325 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:34.596 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.596 { 00:09:34.596 "params": { 00:09:34.596 "name": "Nvme$subsystem", 00:09:34.596 "trtype": "$TEST_TRANSPORT", 00:09:34.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.596 "adrfam": "ipv4", 00:09:34.596 "trsvcid": "$NVMF_PORT", 00:09:34.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.596 "hdgst": ${hdgst:-false}, 00:09:34.596 "ddgst": ${ddgst:-false} 00:09:34.596 }, 00:09:34.596 "method": "bdev_nvme_attach_controller" 00:09:34.596 } 00:09:34.596 EOF 00:09:34.596 )") 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3861329 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:34.597 { 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme$subsystem", 00:09:34.597 "trtype": "$TEST_TRANSPORT", 00:09:34.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "$NVMF_PORT", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.597 "hdgst": ${hdgst:-false}, 00:09:34.597 "ddgst": ${ddgst:-false} 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 } 00:09:34.597 EOF 00:09:34.597 )") 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3861319 00:09:34.597 23:50:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:34.597 23:50:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:34.597 "params": { 00:09:34.597 "name": "Nvme1", 00:09:34.597 "trtype": "tcp", 00:09:34.597 "traddr": "10.0.0.2", 00:09:34.597 "adrfam": "ipv4", 00:09:34.597 "trsvcid": "4420", 00:09:34.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.597 "hdgst": false, 00:09:34.597 "ddgst": false 00:09:34.597 }, 00:09:34.597 "method": "bdev_nvme_attach_controller" 00:09:34.597 }' 00:09:34.597 [2024-12-13 23:50:13.678152] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 24.03.0 initialization... 00:09:34.597 [2024-12-13 23:50:13.678243] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:34.597 [2024-12-13 23:50:13.680770] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:34.597 [2024-12-13 23:50:13.680869] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:34.597 [2024-12-13 23:50:13.681612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:34.597 [2024-12-13 23:50:13.681687] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:34.597 [2024-12-13 23:50:13.685097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:34.597 [2024-12-13 23:50:13.685183] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:34.856 [2024-12-13 23:50:13.871671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.856 [2024-12-13 23:50:13.970709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.856 [2024-12-13 23:50:13.981624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:35.114 [2024-12-13 23:50:14.072303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.114 [2024-12-13 23:50:14.076442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:35.114 [2024-12-13 23:50:14.132807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.114 [2024-12-13 23:50:14.182590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:35.114 [2024-12-13 23:50:14.239674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:35.373 Running I/O for 1 seconds... 00:09:35.374 Running I/O for 1 seconds... 00:09:35.633 Running I/O for 1 seconds... 00:09:35.891 Running I/O for 1 seconds... 
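Each of the four bdevperf instances (write/read/flush/unmap, one core each) reads its config from `/dev/fd/63`, generated by `gen_nvmf_target_json`. The heredoc-plus-`jq` trace above expands to one `bdev_nvme_attach_controller` entry per instance; a sketch of that expanded entry, with values mirroring the `printf` output in this log (`hdgst`/`ddgst` default to false):

```shell
# Sketch of the per-instance JSON entry gen_nvmf_target_json emits in this run.
# The real helper pipes this through `jq .` and wraps it into a full bdevperf
# config file; only the single attach-controller entry is reproduced here.
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

All four instances attach to the same cnode1 subsystem, differing only in workload (`-w write|read|flush|unmap`) and core mask.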
00:09:36.469 7888.00 IOPS, 30.81 MiB/s 00:09:36.469 Latency(us) 00:09:36.469 [2024-12-13T22:50:15.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.469 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:36.469 Nvme1n1 : 1.07 7507.23 29.33 0.00 0.00 16173.19 8176.40 69405.74 00:09:36.469 [2024-12-13T22:50:15.610Z] =================================================================================================================== 00:09:36.469 [2024-12-13T22:50:15.610Z] Total : 7507.23 29.33 0.00 0.00 16173.19 8176.40 69405.74 00:09:36.469 9979.00 IOPS, 38.98 MiB/s 00:09:36.469 Latency(us) 00:09:36.469 [2024-12-13T22:50:15.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.469 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:36.469 Nvme1n1 : 1.01 10044.15 39.23 0.00 0.00 12696.30 5461.33 22719.15 00:09:36.469 [2024-12-13T22:50:15.610Z] =================================================================================================================== 00:09:36.469 [2024-12-13T22:50:15.610Z] Total : 10044.15 39.23 0.00 0.00 12696.30 5461.33 22719.15 00:09:36.469 213720.00 IOPS, 834.84 MiB/s 00:09:36.469 Latency(us) 00:09:36.469 [2024-12-13T22:50:15.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.469 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:36.469 Nvme1n1 : 1.00 213366.34 833.46 0.00 0.00 596.89 267.22 1614.99 00:09:36.469 [2024-12-13T22:50:15.610Z] =================================================================================================================== 00:09:36.469 [2024-12-13T22:50:15.610Z] Total : 213366.34 833.46 0.00 0.00 596.89 267.22 1614.99 00:09:36.727 9644.00 IOPS, 37.67 MiB/s 00:09:36.728 Latency(us) 00:09:36.728 [2024-12-13T22:50:15.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.728 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:36.728 Nvme1n1 : 1.01 9743.21 38.06 0.00 0.00 13105.08 3073.95 39945.75 00:09:36.728 [2024-12-13T22:50:15.869Z] =================================================================================================================== 00:09:36.728 [2024-12-13T22:50:15.869Z] Total : 9743.21 38.06 0.00 0.00 13105.08 3073.95 39945.75 00:09:37.295 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3861322 00:09:37.295 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3861325 00:09:37.295 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3861329 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
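The MiB/s column in the latency tables above is just IOPS times the 4096-byte I/O size (`-o 4096`). A quick awk check of the conversion, using two IOPS figures from the tables:

```shell
# Verify the IOPS -> MiB/s conversion used in the bdevperf tables (4096-byte I/Os).
mibps() { awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 4096 / (1024 * 1024) }'; }
mibps 7888     # read table header reports 30.81 MiB/s
mibps 213720   # flush table header reports 834.84 MiB/s
```

The flush workload's ~214k IOPS dwarfs the data-moving workloads because flushes against a malloc bdev carry no payload.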
00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.554 rmmod nvme_tcp 00:09:37.554 rmmod nvme_fabrics 00:09:37.554 rmmod nvme_keyring 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3861118 ']' 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3861118 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3861118 ']' 00:09:37.554 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3861118 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3861118 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3861118' 00:09:37.555 killing process with pid 3861118 00:09:37.555 23:50:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3861118 00:09:37.555 23:50:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3861118 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.959 23:50:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:40.863 00:09:40.863 real 0m13.080s 00:09:40.863 user 0m29.579s 00:09:40.863 sys 0m6.160s 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.863 ************************************ 
00:09:40.863 END TEST nvmf_bdev_io_wait 00:09:40.863 ************************************ 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.863 ************************************ 00:09:40.863 START TEST nvmf_queue_depth 00:09:40.863 ************************************ 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:40.863 * Looking for test storage... 00:09:40.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.863 --rc genhtml_branch_coverage=1 00:09:40.863 --rc genhtml_function_coverage=1 00:09:40.863 --rc genhtml_legend=1 00:09:40.863 --rc geninfo_all_blocks=1 00:09:40.863 --rc 
geninfo_unexecuted_blocks=1 00:09:40.863 00:09:40.863 ' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.863 --rc genhtml_branch_coverage=1 00:09:40.863 --rc genhtml_function_coverage=1 00:09:40.863 --rc genhtml_legend=1 00:09:40.863 --rc geninfo_all_blocks=1 00:09:40.863 --rc geninfo_unexecuted_blocks=1 00:09:40.863 00:09:40.863 ' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.863 --rc genhtml_branch_coverage=1 00:09:40.863 --rc genhtml_function_coverage=1 00:09:40.863 --rc genhtml_legend=1 00:09:40.863 --rc geninfo_all_blocks=1 00:09:40.863 --rc geninfo_unexecuted_blocks=1 00:09:40.863 00:09:40.863 ' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.863 --rc genhtml_branch_coverage=1 00:09:40.863 --rc genhtml_function_coverage=1 00:09:40.863 --rc genhtml_legend=1 00:09:40.863 --rc geninfo_all_blocks=1 00:09:40.863 --rc geninfo_unexecuted_blocks=1 00:09:40.863 00:09:40.863 ' 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.863 23:50:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.863 23:50:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.863 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.124 23:50:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.124 23:50:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:41.124 23:50:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.536 23:50:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:46.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.536 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:46.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:46.537 Found net devices under 0000:af:00.0: cvl_0_0 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:46.537 Found net devices under 0000:af:00.1: cvl_0_1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.537 
23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:09:46.537 00:09:46.537 --- 10.0.0.2 ping statistics --- 00:09:46.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.537 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:09:46.537 00:09:46.537 --- 10.0.0.1 ping statistics --- 00:09:46.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.537 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3865460 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3865460 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3865460 ']' 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.537 23:50:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.537 [2024-12-13 23:50:25.508078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:46.537 [2024-12-13 23:50:25.508173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.537 [2024-12-13 23:50:25.628468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.802 [2024-12-13 23:50:25.730350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.802 [2024-12-13 23:50:25.730393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:46.802 [2024-12-13 23:50:25.730403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.803 [2024-12-13 23:50:25.730414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.803 [2024-12-13 23:50:25.730421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.803 [2024-12-13 23:50:25.731675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 [2024-12-13 23:50:26.350722] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 Malloc0 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.375 [2024-12-13 23:50:26.476051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.375 23:50:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3865696 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3865696 /var/tmp/bdevperf.sock 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3865696 ']' 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:47.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.375 23:50:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.634 [2024-12-13 23:50:26.554108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:47.634 [2024-12-13 23:50:26.554193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865696 ] 00:09:47.634 [2024-12-13 23:50:26.665750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.893 [2024-12-13 23:50:26.776892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.460 NVMe0n1 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.460 23:50:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.719 Running I/O for 10 seconds... 
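The run that follows reports its results as a JSON object (the `"results"` array with `"iops"`, `"mibps"`, and latency fields). As a minimal sketch of post-processing such a log, the snippet below extracts the aggregate IOPS from a bdevperf-style result; the trimmed payload is a hypothetical stand-in with field names taken from the logged output, not the full object bdevperf emits.

```shell
# Hypothetical, trimmed stand-in for a bdevperf JSON result; field
# names mirror the log above, but this is not the complete object.
result='
  "job": "NVMe0n1",
  "queue_depth": 1024,
  "io_size": 4096,
  "iops": 10814.552787872975,
  "mibps": 42.24434682762881
'
# Split each line on the colon, then strip spaces and the trailing
# comma from the value field to recover the bare number.
iops=$(printf '%s\n' "$result" | awk -F: '/"iops"/ {gsub(/[ ,]/, "", $2); print $2}')
echo "IOPS: $iops"
```

For anything beyond a quick grep, a real JSON parser (e.g. `jq '.results[0].iops'`) is the safer choice; the awk form here only works because each key sits on its own line.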
00:09:50.589 10240.00 IOPS, 40.00 MiB/s [2024-12-13T22:50:30.666Z] 10680.50 IOPS, 41.72 MiB/s [2024-12-13T22:50:32.043Z] 10581.33 IOPS, 41.33 MiB/s [2024-12-13T22:50:32.979Z] 10621.75 IOPS, 41.49 MiB/s [2024-12-13T22:50:33.916Z] 10645.80 IOPS, 41.59 MiB/s [2024-12-13T22:50:34.853Z] 10732.67 IOPS, 41.92 MiB/s [2024-12-13T22:50:35.790Z] 10759.57 IOPS, 42.03 MiB/s [2024-12-13T22:50:36.727Z] 10754.25 IOPS, 42.01 MiB/s [2024-12-13T22:50:38.104Z] 10793.22 IOPS, 42.16 MiB/s [2024-12-13T22:50:38.104Z] 10783.40 IOPS, 42.12 MiB/s 00:09:58.963 Latency(us) 00:09:58.963 [2024-12-13T22:50:38.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.963 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:58.963 Verification LBA range: start 0x0 length 0x4000 00:09:58.963 NVMe0n1 : 10.06 10814.55 42.24 0.00 0.00 94295.01 16103.13 61915.92 00:09:58.963 [2024-12-13T22:50:38.104Z] =================================================================================================================== 00:09:58.963 [2024-12-13T22:50:38.105Z] Total : 10814.55 42.24 0.00 0.00 94295.01 16103.13 61915.92 00:09:58.964 { 00:09:58.964 "results": [ 00:09:58.964 { 00:09:58.964 "job": "NVMe0n1", 00:09:58.964 "core_mask": "0x1", 00:09:58.964 "workload": "verify", 00:09:58.964 "status": "finished", 00:09:58.964 "verify_range": { 00:09:58.964 "start": 0, 00:09:58.964 "length": 16384 00:09:58.964 }, 00:09:58.964 "queue_depth": 1024, 00:09:58.964 "io_size": 4096, 00:09:58.964 "runtime": 10.061165, 00:09:58.964 "iops": 10814.552787872975, 00:09:58.964 "mibps": 42.24434682762881, 00:09:58.964 "io_failed": 0, 00:09:58.964 "io_timeout": 0, 00:09:58.964 "avg_latency_us": 94295.01109653748, 00:09:58.964 "min_latency_us": 16103.131428571429, 00:09:58.964 "max_latency_us": 61915.91619047619 00:09:58.964 } 00:09:58.964 ], 00:09:58.964 "core_count": 1 00:09:58.964 } 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 3865696 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3865696 ']' 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3865696 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3865696 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3865696' 00:09:58.964 killing process with pid 3865696 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3865696 00:09:58.964 Received shutdown signal, test time was about 10.000000 seconds 00:09:58.964 00:09:58.964 Latency(us) 00:09:58.964 [2024-12-13T22:50:38.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.964 [2024-12-13T22:50:38.105Z] =================================================================================================================== 00:09:58.964 [2024-12-13T22:50:38.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:58.964 23:50:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3865696 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.901 rmmod nvme_tcp 00:09:59.901 rmmod nvme_fabrics 00:09:59.901 rmmod nvme_keyring 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3865460 ']' 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3865460 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3865460 ']' 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3865460 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3865460 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3865460' 00:09:59.901 killing process with pid 3865460 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3865460 00:09:59.901 23:50:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3865460 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.280 23:50:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.185 23:50:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.185 00:10:03.185 real 0m22.314s 00:10:03.185 user 0m27.670s 00:10:03.185 sys 0m5.777s 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.185 ************************************ 00:10:03.185 END TEST nvmf_queue_depth 00:10:03.185 ************************************ 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.185 ************************************ 00:10:03.185 START TEST nvmf_target_multipath 00:10:03.185 ************************************ 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:03.185 * Looking for test storage... 
00:10:03.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.185 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:03.445 23:50:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
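The trace above shows `scripts/common.sh` comparing `lcov --version` (1.15) against 2 field by field: both versions are split into arrays, then each component is compared numerically, padding the shorter version with zeros. A minimal sketch of that comparison logic (the function name `ver_lt` is illustrative, and this simplified version splits on `.` only, whereas the script also handles `-` and `:` separators):

```shell
# ver_lt A B — succeed (return 0) when version A sorts strictly before B.
# Dot-separated numeric fields are compared left to right; missing
# fields are treated as 0, mirroring the cmp_versions loop in the trace.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # first differing field decides
    (( x > y )) && return 1
  done
  return 1                    # equal versions: not strictly less
}
```

This is why the log takes the `lt 1.15 2` branch and falls back to the older lcov option set: field 0 compares 1 against 2 and the numeric comparison decides immediately, so 1.15 sorts before 2 even though 15 is larger than the (absent) second field of "2".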
00:10:03.445 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.445 --rc genhtml_branch_coverage=1 00:10:03.445 --rc genhtml_function_coverage=1 00:10:03.445 --rc genhtml_legend=1 00:10:03.446 --rc geninfo_all_blocks=1 00:10:03.446 --rc geninfo_unexecuted_blocks=1 00:10:03.446 00:10:03.446 ' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.446 --rc genhtml_branch_coverage=1 00:10:03.446 --rc genhtml_function_coverage=1 00:10:03.446 --rc genhtml_legend=1 00:10:03.446 --rc geninfo_all_blocks=1 00:10:03.446 --rc geninfo_unexecuted_blocks=1 00:10:03.446 00:10:03.446 ' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.446 --rc genhtml_branch_coverage=1 00:10:03.446 --rc genhtml_function_coverage=1 00:10:03.446 --rc genhtml_legend=1 00:10:03.446 --rc geninfo_all_blocks=1 00:10:03.446 --rc geninfo_unexecuted_blocks=1 00:10:03.446 00:10:03.446 ' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.446 --rc genhtml_branch_coverage=1 00:10:03.446 --rc genhtml_function_coverage=1 00:10:03.446 --rc genhtml_legend=1 00:10:03.446 --rc geninfo_all_blocks=1 00:10:03.446 --rc geninfo_unexecuted_blocks=1 00:10:03.446 00:10:03.446 ' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.446 23:50:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.716 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:08.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:08.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
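The `Found 0000:af:00.0 (0x8086 - 0x159b)` lines come from `gather_supported_nvmf_pci_devs`: each PCI address cached for a known vendor/device pair (0x8086:0x159b is an Intel E810-family NIC) is probed for kernel net interfaces under `/sys/bus/pci/devices/$pci/net/`. A sketch of that lookup (the function name `find_net_devs` and the `SYSFS` override are illustrative; the override exists only so the logic can be exercised without real hardware):

```shell
# Print the net interface names bound to one PCI address, as the
# "Found net devices under <pci>: <if>" messages in the log do.
SYSFS=${SYSFS:-/sys/bus/pci/devices}

find_net_devs() {            # find_net_devs <pci-addr>
  local pci=$1 d
  for d in "$SYSFS/$pci/net/"*; do
    [ -e "$d" ] || continue  # glob did not match: no interfaces
    echo "${d##*/}"          # keep only the interface name (e.g. cvl_0_0)
  done
}
```

In the log this expansion is what produces `cvl_0_0` and `cvl_0_1`: the `${pci_net_devs[@]##*/}` substitution on line 427 of `nvmf/common.sh` is the same strip-the-path step as the `${d##*/}` here.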
00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:08.717 Found net devices under 0000:af:00.0: cvl_0_0 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.717 23:50:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:08.717 Found net devices under 0000:af:00.1: cvl_0_1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:10:08.717 00:10:08.717 --- 10.0.0.2 ping statistics --- 00:10:08.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.717 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:08.717 00:10:08.717 --- 10.0.0.1 ping statistics --- 00:10:08.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.717 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:08.717 only one NIC for nvmf test 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.717 23:50:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.717 rmmod nvme_tcp 00:10:08.717 rmmod nvme_fabrics 00:10:08.717 rmmod nvme_keyring 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.717 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.718 23:50:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.255 00:10:11.255 real 0m7.659s 00:10:11.255 user 0m1.624s 00:10:11.255 sys 0m3.970s 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.255 ************************************ 00:10:11.255 END TEST nvmf_target_multipath 00:10:11.255 ************************************ 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.255 ************************************ 00:10:11.255 START TEST nvmf_zcopy 00:10:11.255 ************************************ 00:10:11.255 23:50:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.255 * Looking for test storage... 00:10:11.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.255 23:50:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.255 --rc genhtml_branch_coverage=1 00:10:11.255 --rc genhtml_function_coverage=1 00:10:11.255 --rc genhtml_legend=1 00:10:11.255 --rc geninfo_all_blocks=1 00:10:11.255 --rc geninfo_unexecuted_blocks=1 00:10:11.255 00:10:11.255 ' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.255 --rc genhtml_branch_coverage=1 00:10:11.255 --rc genhtml_function_coverage=1 00:10:11.255 --rc genhtml_legend=1 00:10:11.255 --rc geninfo_all_blocks=1 00:10:11.255 --rc geninfo_unexecuted_blocks=1 00:10:11.255 00:10:11.255 ' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.255 --rc genhtml_branch_coverage=1 00:10:11.255 --rc genhtml_function_coverage=1 00:10:11.255 --rc genhtml_legend=1 00:10:11.255 --rc geninfo_all_blocks=1 00:10:11.255 --rc geninfo_unexecuted_blocks=1 00:10:11.255 00:10:11.255 ' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.255 --rc genhtml_branch_coverage=1 00:10:11.255 --rc 
genhtml_function_coverage=1 00:10:11.255 --rc genhtml_legend=1 00:10:11.255 --rc geninfo_all_blocks=1 00:10:11.255 --rc geninfo_unexecuted_blocks=1 00:10:11.255 00:10:11.255 ' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.255 23:50:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.255 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.256 23:50:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.256 23:50:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:16.529 23:50:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:16.529 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:16.529 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.529 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:16.530 Found net devices under 0000:af:00.0: cvl_0_0 00:10:16.530 23:50:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:16.530 Found net devices under 0000:af:00.1: cvl_0_1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.530 23:50:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:16.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:10:16.530 00:10:16.530 --- 10.0.0.2 ping statistics --- 00:10:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.530 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:10:16.530 00:10:16.530 --- 10.0.0.1 ping statistics --- 00:10:16.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.530 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3874644 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3874644 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3874644 ']' 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.530 23:50:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:16.530 [2024-12-13 23:50:55.612785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:16.530 [2024-12-13 23:50:55.612881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.790 [2024-12-13 23:50:55.730159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.790 [2024-12-13 23:50:55.832519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.790 [2024-12-13 23:50:55.832563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:16.790 [2024-12-13 23:50:55.832573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.790 [2024-12-13 23:50:55.832598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.790 [2024-12-13 23:50:55.832606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.790 [2024-12-13 23:50:55.833999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.357 [2024-12-13 23:50:56.456040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.357 [2024-12-13 23:50:56.472200] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.357 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:17.358 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.358 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.617 malloc0 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:17.617 { 00:10:17.617 "params": { 00:10:17.617 "name": "Nvme$subsystem", 00:10:17.617 "trtype": "$TEST_TRANSPORT", 00:10:17.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.617 "adrfam": "ipv4", 00:10:17.617 "trsvcid": "$NVMF_PORT", 00:10:17.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.617 "hdgst": ${hdgst:-false}, 00:10:17.617 "ddgst": ${ddgst:-false} 00:10:17.617 }, 00:10:17.617 "method": "bdev_nvme_attach_controller" 00:10:17.617 } 00:10:17.617 EOF 00:10:17.617 )") 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:17.617 23:50:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:17.617 "params": { 00:10:17.617 "name": "Nvme1", 00:10:17.617 "trtype": "tcp", 00:10:17.617 "traddr": "10.0.0.2", 00:10:17.617 "adrfam": "ipv4", 00:10:17.617 "trsvcid": "4420", 00:10:17.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.617 "hdgst": false, 00:10:17.617 "ddgst": false 00:10:17.617 }, 00:10:17.617 "method": "bdev_nvme_attach_controller" 00:10:17.617 }' 00:10:17.617 [2024-12-13 23:50:56.604708] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:17.617 [2024-12-13 23:50:56.604792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874886 ] 00:10:17.617 [2024-12-13 23:50:56.715627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.876 [2024-12-13 23:50:56.823533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.444 Running I/O for 10 seconds... 
00:10:20.317 7458.00 IOPS, 58.27 MiB/s [2024-12-13T22:51:00.394Z] 7492.50 IOPS, 58.54 MiB/s [2024-12-13T22:51:01.772Z] 7488.67 IOPS, 58.51 MiB/s [2024-12-13T22:51:02.340Z] 7491.50 IOPS, 58.53 MiB/s [2024-12-13T22:51:03.717Z] 7511.60 IOPS, 58.68 MiB/s [2024-12-13T22:51:04.653Z] 7513.67 IOPS, 58.70 MiB/s [2024-12-13T22:51:05.590Z] 7523.57 IOPS, 58.78 MiB/s [2024-12-13T22:51:06.527Z] 7531.75 IOPS, 58.84 MiB/s [2024-12-13T22:51:07.464Z] 7535.00 IOPS, 58.87 MiB/s [2024-12-13T22:51:07.464Z] 7535.70 IOPS, 58.87 MiB/s 00:10:28.323 Latency(us) 00:10:28.323 [2024-12-13T22:51:07.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:28.323 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:28.323 Verification LBA range: start 0x0 length 0x1000 00:10:28.323 Nvme1n1 : 10.01 7538.02 58.89 0.00 0.00 16933.17 2278.16 24466.77 00:10:28.323 [2024-12-13T22:51:07.464Z] =================================================================================================================== 00:10:28.323 [2024-12-13T22:51:07.464Z] Total : 7538.02 58.89 0.00 0.00 16933.17 2278.16 24466.77 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3877250 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:29.261 23:51:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.261 { 00:10:29.261 "params": { 00:10:29.261 "name": "Nvme$subsystem", 00:10:29.261 "trtype": "$TEST_TRANSPORT", 00:10:29.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.261 "adrfam": "ipv4", 00:10:29.261 "trsvcid": "$NVMF_PORT", 00:10:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.261 "hdgst": ${hdgst:-false}, 00:10:29.261 "ddgst": ${ddgst:-false} 00:10:29.261 }, 00:10:29.261 "method": "bdev_nvme_attach_controller" 00:10:29.261 } 00:10:29.261 EOF 00:10:29.261 )") 00:10:29.261 [2024-12-13 23:51:08.256783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.256820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:29.261 [2024-12-13 23:51:08.264788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.264813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.261 23:51:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.261 "params": { 00:10:29.261 "name": "Nvme1", 00:10:29.261 "trtype": "tcp", 00:10:29.261 "traddr": "10.0.0.2", 00:10:29.261 "adrfam": "ipv4", 00:10:29.261 "trsvcid": "4420", 00:10:29.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.261 "hdgst": false, 00:10:29.261 "ddgst": false 00:10:29.261 }, 00:10:29.261 "method": "bdev_nvme_attach_controller" 00:10:29.261 }' 00:10:29.261 [2024-12-13 23:51:08.272776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.272797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.280799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.280820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.288819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.288839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.300840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.300860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.308887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 
23:51:08.308908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.316893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.316913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.323459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:29.261 [2024-12-13 23:51:08.323532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3877250 ] 00:10:29.261 [2024-12-13 23:51:08.324902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.324922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.332951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.332973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.340954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.340975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.348986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.349008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.357002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.357022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 
23:51:08.365027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.365047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.373047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.373066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.381067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.381086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.389084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.389103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.261 [2024-12-13 23:51:08.397112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.261 [2024-12-13 23:51:08.397131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.405123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.405142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.413152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.413171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.421172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.421190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.429184] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.429203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.435917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.520 [2024-12-13 23:51:08.437216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.437240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.445246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.445266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.453280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.453302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.461302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.461325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.469314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.469333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.477336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.477354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.485357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.485376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:29.520 [2024-12-13 23:51:08.493364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.493383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.501398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.501417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.520 [2024-12-13 23:51:08.509418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.520 [2024-12-13 23:51:08.509446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.517429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.517456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.525467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.525485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.533477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.533495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.541526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.541545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.547087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.521 [2024-12-13 23:51:08.549546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.549565] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.557574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.557593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.565595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.565615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.573607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.573625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.581618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.581637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.589652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.589681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.597673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.597692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.605707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.605725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.521 [2024-12-13 23:51:08.613732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.613750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:29.521 [2024-12-13 23:51:08.621735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.521 [2024-12-13 23:51:08.621753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.040 [... identical *ERROR* pair repeated through 23:51:09.010 ...] 00:10:30.040 Running I/O for 5 seconds... 
00:10:30.040 [2024-12-13 23:51:09.022375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.040 [2024-12-13 23:51:09.022401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.891 [... identical *ERROR* pair repeated through 23:51:10.008 ...] 00:10:30.891 14217.00 IOPS, 111.07 MiB/s [2024-12-13T22:51:10.032Z] 00:10:31.151 [... identical *ERROR* pair repeated through 23:51:10.068 ...] 00:10:31.151 [2024-12-13 23:51:10.077457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 
[2024-12-13 23:51:10.077612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.086968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.086993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.095938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.095961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.104941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.104965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.113707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.113730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.122795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.122819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.131891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.131914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.141009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.141033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.150206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.150231] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.159306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.159334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.168305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.168328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.177316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.177340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.186606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.186629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.195276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.195299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.204203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.204226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.213308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.213331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.222032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.222056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.151 [2024-12-13 23:51:10.231309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.231332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.240333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.240356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.249568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.249592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.258688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.258712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.267780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.267803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.276472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.276496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.151 [2024-12-13 23:51:10.285498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.151 [2024-12-13 23:51:10.285521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.294711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.294735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.303749] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.303772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.312900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.312923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.322174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.322198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.331201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.331229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.340225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.340249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.349024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.349048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.357876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.357899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.367000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.367023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.376080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.376103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.385563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.385587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.394408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.394431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.403355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.403379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.412343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.412366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.421194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.421217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.429931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.429954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.438616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.438640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.447733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 
[2024-12-13 23:51:10.447757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.456791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.456815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.465816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.465839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.474958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.474982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.484323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.484347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.492955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.492978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.502200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.502228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.511462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.511485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.520654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.520678] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.529674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.529697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.538814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.538838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.411 [2024-12-13 23:51:10.547499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.411 [2024-12-13 23:51:10.547523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.556363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.556386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.565013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.565036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.574051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.574075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.583500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.583524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.592623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.592645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.671 [2024-12-13 23:51:10.601879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.601902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.610913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.610937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.619622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.619644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.628721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.628744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.637819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.637842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.647062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.647085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.655928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.655951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.665149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.665172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.674269] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.674296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.683181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.683204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.692037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.692060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.700768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.700791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.709539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.709562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.718282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.718305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.726870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.726893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.736014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.736038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.744806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.744830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.753529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.753553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.762362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.762385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.772238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.772261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.781809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.781833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.789567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.789590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.800358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.800382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.671 [2024-12-13 23:51:10.809133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.671 [2024-12-13 23:51:10.809156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.817962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 
[2024-12-13 23:51:10.817985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.826810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.826833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.835638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.835661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.844467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.844491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.853390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.853413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.862330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.862354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.871164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.871187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.880235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.880257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.889406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.889429] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.898564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.898588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.907295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.907318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.916179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.916203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.925149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.925173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.934203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.934228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.943273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.943297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.952356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.952379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.961309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.961334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.931 [2024-12-13 23:51:10.970434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.970467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.979162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.979186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.988034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.988058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:10.996803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:10.996826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:11.005552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:11.005575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:11.014643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:11.014666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 14182.50 IOPS, 110.80 MiB/s [2024-12-13T22:51:11.072Z] [2024-12-13 23:51:11.023479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:11.023503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.931 [2024-12-13 23:51:11.032464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:11.032489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:31.931 [2024-12-13 23:51:11.041229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.931 [2024-12-13 23:51:11.041253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats roughly every 9 ms from 23:51:11.049 through 23:51:12.013 ...]
00:10:32.971 14220.00 IOPS, 111.09 MiB/s [2024-12-13T22:51:12.112Z]
[... the same error pair continues repeating from 23:51:12.022 through 23:51:12.558 ...]
00:10:33.491 [2024-12-13 23:51:12.567445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.567468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.491 [2024-12-13 23:51:12.576302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.576325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.585354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.585378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.594209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.594233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.602852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.602876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.611738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.611762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.620778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.620801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.491 [2024-12-13 23:51:12.629651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.491 [2024-12-13 23:51:12.629674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.648989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.649016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.660527] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.660551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.668994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.669017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.677970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.677993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.687620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.687644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.696618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.696642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.705220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.705248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.714047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.714069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.722572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.722595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.731649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.731672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.740397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.740420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.749306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.749329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.758278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.758301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.767349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.767373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.776099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.776123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.785064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.785086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.793810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.793833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.802618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 
[2024-12-13 23:51:12.802641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.811504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.811528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.820376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.820398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.829289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.829312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.838427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.838457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.847358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.847382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.856203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.856226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.864926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.864949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.873737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.873765] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.882620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.882643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.751 [2024-12-13 23:51:12.891689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.751 [2024-12-13 23:51:12.891712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.900906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.900930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.909720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.909743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.918758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.918781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.927713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.927735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.936658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.936691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.945598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.945620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.011 [2024-12-13 23:51:12.954261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.954284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.963057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.963079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.971776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.971798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.980604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.980627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.989293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.989315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:12.998199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:12.998222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.007052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.007074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.016129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.016152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 14250.00 IOPS, 111.33 MiB/s 
[2024-12-13T22:51:13.152Z] [2024-12-13 23:51:13.024989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.025012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.034026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.034049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.043035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.043057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.051805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.051827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.060878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.060901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.069538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.069562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.078406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.078428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.086879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.086902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.095906] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.095930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.105086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.105111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.114068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.114092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.122938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.122961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.131969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.131994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.141074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.141097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.011 [2024-12-13 23:51:13.150294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.011 [2024-12-13 23:51:13.150318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.159841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.159865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.168974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.168997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.177859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.177882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.186862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.186885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.195870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.195894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.204963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.204986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.214032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.214054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.222897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.222919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.231814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.231839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.240832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 
[2024-12-13 23:51:13.240855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.250030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.250053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.259191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.259214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.268074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.268098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.276969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.276992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.286346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.286369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.295208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.295233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.303940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.303965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.312862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.312887] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.321590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.321615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.330719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.330744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.339725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.339748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.348515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.348540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.357556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.357580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.366497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.366520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.375226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.375249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.383990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.384015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:34.348 [2024-12-13 23:51:13.392679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.392702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.401431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.401462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.410184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.410208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.418881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.418904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.427582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.427605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.437522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.437545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.447383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.447406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.348 [2024-12-13 23:51:13.455225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.348 [2024-12-13 23:51:13.455248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.466511] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.466534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.475377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.475399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.484456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.484479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.493107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.493130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.501997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.502020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.510799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.510822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.519649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.519672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.528663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.528687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629 [2024-12-13 23:51:13.537433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:34.629 [2024-12-13 23:51:13.537464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.629
[... the error pair "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats roughly every 9 ms from 23:51:13.546448 through 23:51:13.965305 ...]
[2024-12-13 23:51:13.974351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.889
[2024-12-13 23:51:13.974375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.889
[... the error pair repeats from 23:51:13.983230 through 23:51:14.019053 ...]
14258.60 IOPS, 111.40 MiB/s [2024-12-13T22:51:14.031Z]
[... two more error pairs at 23:51:14.027662 and 23:51:14.033920 ...]
00:10:35.149 Latency(us) [2024-12-13T22:51:14.290Z]
Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min      max
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
Nvme1n1            :       5.01 14259.68   111.40     0.00   0.00   8966.99  3900.95 15853.47
=============================================================================================
Total              :            14259.68   111.40     0.00   0.00   8966.99  3900.95 15853.47
[... the error pair repeats from 23:51:14.041802 through 23:51:14.081923 ...]
[2024-12-13 23:51:14.089945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
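The performance summary above follows the bdevperf-style layout "name : runtime IOPS MiB/s Fail/s TO/s Average min max". As an illustrative aside (not part of the test suite), a minimal sketch of pulling that row into named fields, assuming exactly that column order from the header printed in this log:

```python
# Sketch: parse a bdevperf-style result row into a dict.
# Field names follow the header line in the log above; this helper
# is hypothetical, not an SPDK utility.
def parse_result_row(line: str) -> dict:
    name, _, values = line.partition(":")
    fields = ["runtime_s", "iops", "mib_s", "fail_s", "to_s",
              "avg_us", "min_us", "max_us"]
    return {"device": name.strip(),
            **dict(zip(fields, (float(v) for v in values.split())))}

row = parse_result_row(
    "Nvme1n1 : 5.01 14259.68 111.40 0.00 0.00 8966.99 3900.95 15853.47")
```

This is only a convenience for post-processing logs like this one; a real pipeline would also guard against the "Total" row, which omits the runtime column.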
Requested NSID 1 already in use 00:10:35.149 [2024-12-13 23:51:14.089967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.149
[... the error pair repeats every ~8 ms from 23:51:14.097930 through 23:51:14.868058 ...]
[2024-12-13 23:51:14.876070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.876087]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.884095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.884113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.892106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.892124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.900136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.900154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.908160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.908177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.916178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.916195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.924205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.924223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 [2024-12-13 23:51:14.932215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.928 [2024-12-13 23:51:14.932236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3877250) - No such process 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 
3877250 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.928 delay0 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.928 23:51:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:36.186 [2024-12-13 23:51:15.110409] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.304 Initializing NVMe Controllers 00:10:44.304 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.304 Initialization complete. Launching workers. 00:10:44.304 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 12646 00:10:44.304 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12866, failed to submit 74 00:10:44.304 success 12730, unsuccessful 136, failed 0 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.304 rmmod nvme_tcp 00:10:44.304 rmmod nvme_fabrics 00:10:44.304 rmmod nvme_keyring 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3874644 ']' 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3874644 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 3874644 ']' 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3874644 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3874644 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3874644' 00:10:44.304 killing process with pid 3874644 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3874644 00:10:44.304 23:51:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3874644 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.564 23:51:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.099 00:10:47.099 real 0m35.713s 00:10:47.099 user 0m49.912s 00:10:47.099 sys 0m11.666s 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.099 ************************************ 00:10:47.099 END TEST nvmf_zcopy 00:10:47.099 ************************************ 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.099 ************************************ 00:10:47.099 START TEST nvmf_nmic 00:10:47.099 ************************************ 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:47.099 * Looking for test storage... 
00:10:47.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.099 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.100 23:51:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.100 --rc genhtml_branch_coverage=1 00:10:47.100 --rc genhtml_function_coverage=1 00:10:47.100 --rc genhtml_legend=1 00:10:47.100 --rc geninfo_all_blocks=1 00:10:47.100 --rc geninfo_unexecuted_blocks=1 
00:10:47.100 00:10:47.100 ' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.100 --rc genhtml_branch_coverage=1 00:10:47.100 --rc genhtml_function_coverage=1 00:10:47.100 --rc genhtml_legend=1 00:10:47.100 --rc geninfo_all_blocks=1 00:10:47.100 --rc geninfo_unexecuted_blocks=1 00:10:47.100 00:10:47.100 ' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.100 --rc genhtml_branch_coverage=1 00:10:47.100 --rc genhtml_function_coverage=1 00:10:47.100 --rc genhtml_legend=1 00:10:47.100 --rc geninfo_all_blocks=1 00:10:47.100 --rc geninfo_unexecuted_blocks=1 00:10:47.100 00:10:47.100 ' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.100 --rc genhtml_branch_coverage=1 00:10:47.100 --rc genhtml_function_coverage=1 00:10:47.100 --rc genhtml_legend=1 00:10:47.100 --rc geninfo_all_blocks=1 00:10:47.100 --rc geninfo_unexecuted_blocks=1 00:10:47.100 00:10:47.100 ' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.100 23:51:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.100 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.100 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.101 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.101 
23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.101 23:51:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.373 23:51:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:52.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:52.373 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:52.373 Found net devices under 0000:af:00.0: cvl_0_0 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:52.373 Found net devices under 0000:af:00.1: cvl_0_1 00:10:52.373 
23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
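The device-discovery trace above (`nvmf/common.sh@411`–`@428`) maps each PCI address to its kernel net-device name with a bash glob followed by a longest-prefix strip. A minimal standalone sketch of that pattern, using a throwaway fake sysfs tree (an assumption for illustration; the real script globs `/sys/bus/pci/devices/$pci/net/*` on the test host):

```shell
#!/usr/bin/env bash
# Sketch of the glob-and-strip pattern from nvmf/common.sh.
# The fake sysfs tree is an assumption so the snippet runs anywhere.
set -euo pipefail

fake_sys=$(mktemp -d)
pci="0000:af:00.0"
mkdir -p "$fake_sys/devices/$pci/net/cvl_0_0"

# Same shape as common.sh@411: glob the per-PCI net directory.
pci_net_devs=("$fake_sys/devices/$pci/net/"*)

# Same as common.sh@427: drop everything up to the last '/',
# keeping only the interface name.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[0]}"
rm -rf "$fake_sys"
```

The `##*/` expansion is what turns a full sysfs path into the bare `cvl_0_0` name the trace prints.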
00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.373 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:10:52.374 00:10:52.374 --- 10.0.0.2 ping statistics --- 00:10:52.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.374 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:52.374 00:10:52.374 --- 10.0.0.1 ping statistics --- 00:10:52.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.374 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3883218 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3883218 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3883218 
']' 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.374 23:51:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.633 [2024-12-13 23:51:31.542531] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:52.633 [2024-12-13 23:51:31.542621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.633 [2024-12-13 23:51:31.662827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.633 [2024-12-13 23:51:31.771025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.633 [2024-12-13 23:51:31.771072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.633 [2024-12-13 23:51:31.771086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.633 [2024-12-13 23:51:31.771097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:52.633 [2024-12-13 23:51:31.771106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.633 [2024-12-13 23:51:31.773551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.633 [2024-12-13 23:51:31.773673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.633 [2024-12-13 23:51:31.773725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.633 [2024-12-13 23:51:31.773736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 [2024-12-13 23:51:32.391798] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.570 23:51:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 Malloc0 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 [2024-12-13 23:51:32.518298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:53.570 test case1: single bdev can't be used in multiple subsystems 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 [2024-12-13 23:51:32.546193] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:53.570 [2024-12-13 23:51:32.546225] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:53.570 [2024-12-13 23:51:32.546237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:53.570 request: 00:10:53.570 { 00:10:53.570 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:53.570 "namespace": { 00:10:53.570 "bdev_name": "Malloc0", 00:10:53.570 "no_auto_visible": false, 00:10:53.570 "hide_metadata": false 00:10:53.570 }, 00:10:53.570 "method": "nvmf_subsystem_add_ns", 00:10:53.570 "req_id": 1 00:10:53.570 } 00:10:53.570 Got JSON-RPC error response 00:10:53.570 response: 00:10:53.570 { 00:10:53.570 "code": -32602, 00:10:53.570 "message": "Invalid parameters" 00:10:53.570 } 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:53.570 Adding namespace failed - expected result. 
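Test case 1 above exercises an *expected* RPC failure: `target/nmic.sh@28`–`@36` records the exit status of `nvmf_subsystem_add_ns` instead of letting `set -e` abort, then asserts the call did fail. A hedged sketch of that capture pattern, with `false` standing in for the real `rpc_cmd` invocation (which needs a running target):

```shell
#!/usr/bin/env bash
# Sketch of the nmic_status expected-failure pattern from target/nmic.sh.
# `false` is a stand-in for `rpc_cmd nvmf_subsystem_add_ns ...` on an
# already-claimed bdev.
set -euo pipefail

nmic_status=0
false || nmic_status=1   # record the failure; `|| ...` keeps set -e quiet

if [ "$nmic_status" -eq 0 ]; then
    echo "Adding namespace succeeded - unexpected" >&2
    exit 1
fi
echo ' Adding namespace failed - expected result.'
```

The `cmd || status=1` idiom is what lets a `set -e` script treat a nonzero exit as data rather than as a fatal error.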
00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:53.570 test case2: host connect to nvmf target in multiple paths 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.570 [2024-12-13 23:51:32.558319] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:53.570 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.571 23:51:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.948 23:51:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:55.884 23:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.884 23:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.884 23:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.884 23:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:55.884 23:51:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
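The `waitforserial` helper traced next polls `lsblk` in a bounded loop until a block device with the expected serial appears. A self-contained sketch of that retry shape, where a counter-based `probe` stands in for `lsblk -l -o NAME,SERIAL | grep -c "$serial"` (an assumption, since the real check needs an attached NVMe namespace):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial poll loop from autotest_common.sh: bounded
# retries, short sleep between polls, break as soon as the expected device
# count is seen. probe() is a stand-in that "attaches" on the 3rd poll.
set -u

nvme_device_counter=1
nvme_devices=0
attempt=0

probe() {
    # assumption: the device becomes visible on the 3rd poll
    if [ "$attempt" -ge 3 ]; then echo 1; else echo 0; fi
}

i=0
while (( i++ <= 15 )); do          # same 16-try bound as the trace
    attempt=$((attempt + 1))
    nvme_devices=$(probe)
    if (( nvme_devices == nvme_device_counter )); then
        break
    fi
    sleep 0.1                      # the real helper sleeps 2s between polls
done

echo "ready after $attempt polls"
```

Bounding the loop at 16 iterations keeps a missing device from hanging the test forever; the trace's `return 0` fires only once the device count matches.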
00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:57.788 23:51:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.052 [global] 00:10:58.052 thread=1 00:10:58.052 invalidate=1 00:10:58.052 rw=write 00:10:58.052 time_based=1 00:10:58.052 runtime=1 00:10:58.052 ioengine=libaio 00:10:58.052 direct=1 00:10:58.052 bs=4096 00:10:58.052 iodepth=1 00:10:58.052 norandommap=0 00:10:58.052 numjobs=1 00:10:58.052 00:10:58.052 verify_dump=1 00:10:58.052 verify_backlog=512 00:10:58.052 verify_state_save=0 00:10:58.052 do_verify=1 00:10:58.052 verify=crc32c-intel 00:10:58.052 [job0] 00:10:58.052 filename=/dev/nvme0n1 00:10:58.052 Could not set queue depth (nvme0n1) 00:10:58.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.308 fio-3.35 00:10:58.308 Starting 1 thread 00:10:59.677 00:10:59.677 job0: (groupid=0, jobs=1): err= 0: pid=3884391: Fri Dec 13 23:51:38 2024 00:10:59.677 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:10:59.677 slat (nsec): min=9595, max=23638, avg=22228.52, stdev=2779.10 00:10:59.677 clat (usec): min=40842, max=42014, avg=41181.47, stdev=412.05 00:10:59.677 lat (usec): min=40856, max=42037, 
avg=41203.70, stdev=412.54 00:10:59.677 clat percentiles (usec): 00:10:59.677 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:59.677 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:59.677 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:59.677 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:59.677 | 99.99th=[42206] 00:10:59.677 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:59.677 slat (nsec): min=9220, max=40657, avg=10269.23, stdev=2024.08 00:10:59.677 clat (usec): min=133, max=325, avg=162.60, stdev=11.73 00:10:59.677 lat (usec): min=144, max=365, avg=172.87, stdev=12.50 00:10:59.677 clat percentiles (usec): 00:10:59.677 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 157], 00:10:59.677 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 161], 60.00th=[ 163], 00:10:59.677 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 172], 95.00th=[ 176], 00:10:59.677 | 99.00th=[ 196], 99.50th=[ 237], 99.90th=[ 326], 99.95th=[ 326], 00:10:59.677 | 99.99th=[ 326] 00:10:59.677 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:59.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:59.677 lat (usec) : 250=95.33%, 500=0.37% 00:10:59.677 lat (msec) : 50=4.30% 00:10:59.677 cpu : usr=0.00%, sys=0.77%, ctx=535, majf=0, minf=1 00:10:59.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.677 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.677 00:10:59.677 Run status group 0 (all jobs): 00:10:59.677 READ: bw=88.6KiB/s (90.8kB/s), 88.6KiB/s-88.6KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), 
run=1038-1038msec 00:10:59.677 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:10:59.677 00:10:59.677 Disk stats (read/write): 00:10:59.677 nvme0n1: ios=69/512, merge=0/0, ticks=1010/84, in_queue=1094, util=95.69% 00:10:59.677 23:51:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.934 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.191 rmmod nvme_tcp 00:11:00.191 rmmod nvme_fabrics 00:11:00.191 rmmod nvme_keyring 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3883218 ']' 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3883218 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3883218 ']' 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3883218 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883218 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.191 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883218' 00:11:00.192 killing process with pid 3883218 00:11:00.192 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3883218 00:11:00.192 23:51:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3883218 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.562 23:51:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.089 00:11:04.089 real 0m16.941s 00:11:04.089 user 0m41.536s 00:11:04.089 sys 0m5.144s 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.089 ************************************ 00:11:04.089 END TEST nvmf_nmic 00:11:04.089 ************************************ 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.089 ************************************ 00:11:04.089 START TEST nvmf_fio_target 00:11:04.089 ************************************ 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:04.089 * Looking for test storage... 00:11:04.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.089 23:51:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.089 --rc genhtml_branch_coverage=1 00:11:04.089 --rc genhtml_function_coverage=1 00:11:04.089 --rc genhtml_legend=1 00:11:04.089 --rc geninfo_all_blocks=1 00:11:04.089 --rc geninfo_unexecuted_blocks=1 00:11:04.089 00:11:04.089 ' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.089 --rc genhtml_branch_coverage=1 00:11:04.089 --rc genhtml_function_coverage=1 00:11:04.089 --rc genhtml_legend=1 00:11:04.089 --rc geninfo_all_blocks=1 00:11:04.089 --rc geninfo_unexecuted_blocks=1 00:11:04.089 00:11:04.089 ' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.089 --rc genhtml_branch_coverage=1 00:11:04.089 --rc genhtml_function_coverage=1 00:11:04.089 --rc genhtml_legend=1 00:11:04.089 --rc geninfo_all_blocks=1 00:11:04.089 --rc geninfo_unexecuted_blocks=1 00:11:04.089 00:11:04.089 ' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.089 --rc 
genhtml_branch_coverage=1 00:11:04.089 --rc genhtml_function_coverage=1 00:11:04.089 --rc genhtml_legend=1 00:11:04.089 --rc geninfo_all_blocks=1 00:11:04.089 --rc geninfo_unexecuted_blocks=1 00:11:04.089 00:11:04.089 ' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.089 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.090 23:51:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:09.347 23:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.347 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.347 23:51:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.347 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.348 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.348 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.348 Found net devices under 0000:af:00.1: cvl_0_1 
00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:09.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:11:09.348 00:11:09.348 --- 10.0.0.2 ping statistics --- 00:11:09.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.348 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:09.348 00:11:09.348 --- 10.0.0.1 ping statistics --- 00:11:09.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.348 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3888118 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3888118 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3888118 ']' 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.348 23:51:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.348 [2024-12-13 23:51:48.021588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:09.348 [2024-12-13 23:51:48.021698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.348 [2024-12-13 23:51:48.141997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.348 [2024-12-13 23:51:48.250755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.348 [2024-12-13 23:51:48.250804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.348 [2024-12-13 23:51:48.250815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.348 [2024-12-13 23:51:48.250839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.348 [2024-12-13 23:51:48.250847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:09.348 [2024-12-13 23:51:48.253317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.348 [2024-12-13 23:51:48.253388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.348 [2024-12-13 23:51:48.253495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.348 [2024-12-13 23:51:48.253503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.914 23:51:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:09.914 [2024-12-13 23:51:49.021141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.171 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.429 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:10.429 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.687 23:51:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:10.687 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:10.944 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:10.944 23:51:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.202 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:11.202 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:11.460 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.718 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:11.718 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:11.976 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:11.976 23:51:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.233 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:12.233 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:12.233 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.491 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.491 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:12.749 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:12.749 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.007 23:51:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.007 [2024-12-13 23:51:52.129823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.265 23:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:13.265 23:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:13.523 23:51:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:14.894 23:51:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:16.788 23:51:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:16.788 [global] 00:11:16.788 thread=1 00:11:16.788 invalidate=1 00:11:16.788 rw=write 00:11:16.788 time_based=1 00:11:16.788 runtime=1 00:11:16.788 ioengine=libaio 00:11:16.788 direct=1 00:11:16.788 bs=4096 00:11:16.788 iodepth=1 00:11:16.788 norandommap=0 00:11:16.788 numjobs=1 00:11:16.788 00:11:16.788 
verify_dump=1 00:11:16.788 verify_backlog=512 00:11:16.788 verify_state_save=0 00:11:16.788 do_verify=1 00:11:16.788 verify=crc32c-intel 00:11:16.788 [job0] 00:11:16.788 filename=/dev/nvme0n1 00:11:16.788 [job1] 00:11:16.788 filename=/dev/nvme0n2 00:11:16.788 [job2] 00:11:16.788 filename=/dev/nvme0n3 00:11:16.788 [job3] 00:11:16.788 filename=/dev/nvme0n4 00:11:16.788 Could not set queue depth (nvme0n1) 00:11:16.788 Could not set queue depth (nvme0n2) 00:11:16.788 Could not set queue depth (nvme0n3) 00:11:16.788 Could not set queue depth (nvme0n4) 00:11:17.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.046 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.046 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.046 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.046 fio-3.35 00:11:17.046 Starting 4 threads 00:11:18.418 00:11:18.419 job0: (groupid=0, jobs=1): err= 0: pid=3889656: Fri Dec 13 23:51:57 2024 00:11:18.419 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:11:18.419 slat (nsec): min=10017, max=23576, avg=21476.14, stdev=2640.69 00:11:18.419 clat (usec): min=40811, max=41145, avg=40962.85, stdev=68.45 00:11:18.419 lat (usec): min=40833, max=41168, avg=40984.32, stdev=69.40 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:18.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.419 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:18.419 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:18.419 | 99.99th=[41157] 00:11:18.419 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:18.419 slat (nsec): min=8707, max=40470, 
avg=11834.68, stdev=2813.97 00:11:18.419 clat (usec): min=157, max=350, avg=213.63, stdev=23.03 00:11:18.419 lat (usec): min=166, max=360, avg=225.47, stdev=23.19 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:11:18.419 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 217], 00:11:18.419 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 253], 00:11:18.419 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 351], 99.95th=[ 351], 00:11:18.419 | 99.99th=[ 351] 00:11:18.419 bw ( KiB/s): min= 4087, max= 4087, per=25.54%, avg=4087.00, stdev= 0.00, samples=1 00:11:18.419 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:18.419 lat (usec) : 250=90.26%, 500=5.62% 00:11:18.419 lat (msec) : 50=4.12% 00:11:18.419 cpu : usr=0.49%, sys=0.88%, ctx=534, majf=0, minf=1 00:11:18.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.419 job1: (groupid=0, jobs=1): err= 0: pid=3889662: Fri Dec 13 23:51:57 2024 00:11:18.419 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:11:18.419 slat (nsec): min=9199, max=24781, avg=12604.18, stdev=4895.68 00:11:18.419 clat (usec): min=40849, max=42004, avg=41220.04, stdev=418.91 00:11:18.419 lat (usec): min=40860, max=42028, avg=41232.64, stdev=420.85 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:18.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.419 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:18.419 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:11:18.419 | 99.99th=[42206] 00:11:18.419 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:11:18.419 slat (nsec): min=9958, max=43502, avg=11715.39, stdev=2100.21 00:11:18.419 clat (usec): min=158, max=305, avg=212.85, stdev=21.64 00:11:18.419 lat (usec): min=171, max=340, avg=224.57, stdev=21.80 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:11:18.419 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:11:18.419 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 247], 00:11:18.419 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 306], 00:11:18.419 | 99.99th=[ 306] 00:11:18.419 bw ( KiB/s): min= 4087, max= 4087, per=25.54%, avg=4087.00, stdev= 0.00, samples=1 00:11:18.419 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:18.419 lat (usec) : 250=91.95%, 500=3.93% 00:11:18.419 lat (msec) : 50=4.12% 00:11:18.419 cpu : usr=0.98%, sys=0.29%, ctx=534, majf=0, minf=1 00:11:18.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.419 job2: (groupid=0, jobs=1): err= 0: pid=3889689: Fri Dec 13 23:51:57 2024 00:11:18.419 read: IOPS=2170, BW=8683KiB/s (8892kB/s)(8692KiB/1001msec) 00:11:18.419 slat (nsec): min=7132, max=37504, avg=8100.92, stdev=1250.51 00:11:18.419 clat (usec): min=193, max=333, avg=232.65, stdev=17.99 00:11:18.419 lat (usec): min=201, max=340, avg=240.75, stdev=17.97 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:11:18.419 | 30.00th=[ 221], 40.00th=[ 227], 
50.00th=[ 231], 60.00th=[ 237], 00:11:18.419 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 265], 00:11:18.419 | 99.00th=[ 277], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 330], 00:11:18.419 | 99.99th=[ 334] 00:11:18.419 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:18.419 slat (nsec): min=9987, max=46786, avg=11530.40, stdev=2311.72 00:11:18.419 clat (usec): min=126, max=1362, avg=169.57, stdev=33.37 00:11:18.419 lat (usec): min=143, max=1373, avg=181.10, stdev=33.80 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:11:18.419 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:11:18.419 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 210], 00:11:18.419 | 99.00th=[ 235], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 338], 00:11:18.419 | 99.99th=[ 1369] 00:11:18.419 bw ( KiB/s): min=10091, max=10091, per=63.07%, avg=10091.00, stdev= 0.00, samples=1 00:11:18.419 iops : min= 2522, max= 2522, avg=2522.00, stdev= 0.00, samples=1 00:11:18.419 lat (usec) : 250=91.15%, 500=8.83% 00:11:18.419 lat (msec) : 2=0.02% 00:11:18.419 cpu : usr=3.90%, sys=7.40%, ctx=4733, majf=0, minf=1 00:11:18.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 issued rwts: total=2173,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.419 job3: (groupid=0, jobs=1): err= 0: pid=3889701: Fri Dec 13 23:51:57 2024 00:11:18.419 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:11:18.419 slat (nsec): min=10287, max=23993, avg=22483.95, stdev=2746.07 00:11:18.419 clat (usec): min=40851, max=42003, avg=41336.71, stdev=472.40 00:11:18.419 lat (usec): min=40875, max=42027, avg=41359.19, 
stdev=472.56 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:18.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:18.419 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:18.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:18.419 | 99.99th=[42206] 00:11:18.419 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:11:18.419 slat (nsec): min=9758, max=43379, avg=11506.72, stdev=2580.47 00:11:18.419 clat (usec): min=156, max=434, avg=190.58, stdev=19.59 00:11:18.419 lat (usec): min=167, max=478, avg=202.08, stdev=20.84 00:11:18.419 clat percentiles (usec): 00:11:18.419 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:11:18.419 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:11:18.419 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 210], 95.00th=[ 217], 00:11:18.419 | 99.00th=[ 231], 99.50th=[ 237], 99.90th=[ 437], 99.95th=[ 437], 00:11:18.419 | 99.99th=[ 437] 00:11:18.419 bw ( KiB/s): min= 4087, max= 4087, per=25.54%, avg=4087.00, stdev= 0.00, samples=1 00:11:18.419 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:18.419 lat (usec) : 250=95.51%, 500=0.37% 00:11:18.419 lat (msec) : 50=4.12% 00:11:18.419 cpu : usr=0.20%, sys=0.69%, ctx=534, majf=0, minf=1 00:11:18.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:18.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.419 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:18.419 00:11:18.419 Run status group 0 (all jobs): 00:11:18.419 READ: bw=8746KiB/s (8956kB/s), 85.9KiB/s-8683KiB/s (88.0kB/s-8892kB/s), io=8956KiB (9171kB), run=1001-1024msec 
00:11:18.419 WRITE: bw=15.6MiB/s (16.4MB/s), 2000KiB/s-9.99MiB/s (2048kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1024msec 00:11:18.419 00:11:18.419 Disk stats (read/write): 00:11:18.419 nvme0n1: ios=66/512, merge=0/0, ticks=687/107, in_queue=794, util=83.37% 00:11:18.419 nvme0n2: ios=16/512, merge=0/0, ticks=659/111, in_queue=770, util=83.47% 00:11:18.419 nvme0n3: ios=1766/2048, merge=0/0, ticks=394/331, in_queue=725, util=87.80% 00:11:18.419 nvme0n4: ios=17/512, merge=0/0, ticks=703/99, in_queue=802, util=89.24% 00:11:18.419 23:51:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:18.419 [global] 00:11:18.419 thread=1 00:11:18.419 invalidate=1 00:11:18.419 rw=randwrite 00:11:18.419 time_based=1 00:11:18.419 runtime=1 00:11:18.419 ioengine=libaio 00:11:18.419 direct=1 00:11:18.419 bs=4096 00:11:18.419 iodepth=1 00:11:18.419 norandommap=0 00:11:18.419 numjobs=1 00:11:18.419 00:11:18.419 verify_dump=1 00:11:18.419 verify_backlog=512 00:11:18.419 verify_state_save=0 00:11:18.419 do_verify=1 00:11:18.419 verify=crc32c-intel 00:11:18.419 [job0] 00:11:18.419 filename=/dev/nvme0n1 00:11:18.419 [job1] 00:11:18.419 filename=/dev/nvme0n2 00:11:18.419 [job2] 00:11:18.419 filename=/dev/nvme0n3 00:11:18.419 [job3] 00:11:18.419 filename=/dev/nvme0n4 00:11:18.419 Could not set queue depth (nvme0n1) 00:11:18.419 Could not set queue depth (nvme0n2) 00:11:18.419 Could not set queue depth (nvme0n3) 00:11:18.419 Could not set queue depth (nvme0n4) 00:11:18.677 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.677 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.677 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.677 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.677 fio-3.35 00:11:18.677 Starting 4 threads 00:11:20.050 00:11:20.050 job0: (groupid=0, jobs=1): err= 0: pid=3890125: Fri Dec 13 23:51:58 2024 00:11:20.050 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:11:20.050 slat (nsec): min=9773, max=25788, avg=22361.91, stdev=2989.89 00:11:20.050 clat (usec): min=40874, max=41077, avg=40968.29, stdev=50.23 00:11:20.050 lat (usec): min=40896, max=41102, avg=40990.65, stdev=49.75 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:20.050 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.050 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:20.050 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:20.050 | 99.99th=[41157] 00:11:20.050 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:20.050 slat (nsec): min=10411, max=37341, avg=11846.80, stdev=2306.29 00:11:20.050 clat (usec): min=161, max=282, avg=185.15, stdev=14.14 00:11:20.050 lat (usec): min=172, max=318, avg=197.00, stdev=14.74 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:11:20.050 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:11:20.050 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 210], 00:11:20.050 | 99.00th=[ 227], 99.50th=[ 245], 99.90th=[ 281], 99.95th=[ 281], 00:11:20.050 | 99.99th=[ 281] 00:11:20.050 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.050 lat (usec) : 250=95.51%, 500=0.37% 00:11:20.050 lat (msec) : 50=4.12% 00:11:20.050 cpu : usr=0.60%, sys=0.80%, ctx=537, majf=0, minf=1 00:11:20.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:11:20.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.050 job1: (groupid=0, jobs=1): err= 0: pid=3890136: Fri Dec 13 23:51:58 2024 00:11:20.050 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:11:20.050 slat (nsec): min=10725, max=22648, avg=21651.45, stdev=2452.03 00:11:20.050 clat (usec): min=40881, max=41992, avg=41391.37, stdev=478.29 00:11:20.050 lat (usec): min=40904, max=42014, avg=41413.02, stdev=478.41 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:20.050 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:11:20.050 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:20.050 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:20.050 | 99.99th=[42206] 00:11:20.050 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:11:20.050 slat (nsec): min=10842, max=39367, avg=12220.31, stdev=1864.14 00:11:20.050 clat (usec): min=156, max=326, avg=186.48, stdev=13.86 00:11:20.050 lat (usec): min=168, max=365, avg=198.71, stdev=14.55 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:11:20.050 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:11:20.050 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 210], 00:11:20.050 | 99.00th=[ 219], 99.50th=[ 223], 99.90th=[ 326], 99.95th=[ 326], 00:11:20.050 | 99.99th=[ 326] 00:11:20.050 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.050 lat (usec) : 
250=95.69%, 500=0.19% 00:11:20.050 lat (msec) : 50=4.12% 00:11:20.050 cpu : usr=0.30%, sys=0.59%, ctx=536, majf=0, minf=1 00:11:20.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.050 job2: (groupid=0, jobs=1): err= 0: pid=3890152: Fri Dec 13 23:51:58 2024 00:11:20.050 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:11:20.050 slat (nsec): min=9739, max=23827, avg=22351.41, stdev=2846.58 00:11:20.050 clat (usec): min=40812, max=42051, avg=41266.43, stdev=460.68 00:11:20.050 lat (usec): min=40835, max=42075, avg=41288.78, stdev=460.25 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:20.050 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.050 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:20.050 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:20.050 | 99.99th=[42206] 00:11:20.050 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:20.050 slat (nsec): min=9007, max=36979, avg=9993.16, stdev=1468.47 00:11:20.050 clat (usec): min=168, max=377, avg=224.24, stdev=24.23 00:11:20.050 lat (usec): min=178, max=393, avg=234.24, stdev=24.54 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 208], 00:11:20.050 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:11:20.050 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 258], 00:11:20.050 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 379], 99.95th=[ 379], 00:11:20.050 | 99.99th=[ 379] 00:11:20.050 bw ( 
KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.050 lat (usec) : 250=88.20%, 500=7.68% 00:11:20.050 lat (msec) : 50=4.12% 00:11:20.050 cpu : usr=0.19%, sys=0.58%, ctx=534, majf=0, minf=2 00:11:20.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.050 job3: (groupid=0, jobs=1): err= 0: pid=3890158: Fri Dec 13 23:51:58 2024 00:11:20.050 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:11:20.050 slat (nsec): min=9243, max=24082, avg=22620.64, stdev=3014.24 00:11:20.050 clat (usec): min=40875, max=41982, avg=41275.98, stdev=458.14 00:11:20.050 lat (usec): min=40898, max=42005, avg=41298.60, stdev=457.47 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:20.050 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:20.050 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:20.050 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:20.050 | 99.99th=[42206] 00:11:20.050 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:20.050 slat (nsec): min=8896, max=40118, avg=9855.95, stdev=1527.91 00:11:20.050 clat (usec): min=170, max=361, avg=223.77, stdev=22.75 00:11:20.050 lat (usec): min=181, max=401, avg=233.63, stdev=23.16 00:11:20.050 clat percentiles (usec): 00:11:20.050 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 206], 00:11:20.050 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:11:20.050 | 
70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 258], 00:11:20.050 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 363], 99.95th=[ 363], 00:11:20.050 | 99.99th=[ 363] 00:11:20.050 bw ( KiB/s): min= 4096, max= 4096, per=51.50%, avg=4096.00, stdev= 0.00, samples=1 00:11:20.050 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:20.050 lat (usec) : 250=88.39%, 500=7.49% 00:11:20.050 lat (msec) : 50=4.12% 00:11:20.050 cpu : usr=0.00%, sys=0.78%, ctx=534, majf=0, minf=2 00:11:20.050 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.050 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.050 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.050 00:11:20.050 Run status group 0 (all jobs): 00:11:20.050 READ: bw=342KiB/s (350kB/s), 85.4KiB/s-87.6KiB/s (87.5kB/s-89.7kB/s), io=352KiB (360kB), run=1005-1030msec 00:11:20.050 WRITE: bw=7953KiB/s (8144kB/s), 1988KiB/s-2038KiB/s (2036kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1030msec 00:11:20.050 00:11:20.050 Disk stats (read/write): 00:11:20.050 nvme0n1: ios=52/512, merge=0/0, ticks=1654/88, in_queue=1742, util=97.80% 00:11:20.050 nvme0n2: ios=54/512, merge=0/0, ticks=1659/89, in_queue=1748, util=99.39% 00:11:20.050 nvme0n3: ios=74/512, merge=0/0, ticks=1217/111, in_queue=1328, util=94.89% 00:11:20.050 nvme0n4: ios=74/512, merge=0/0, ticks=763/114, in_queue=877, util=94.85% 00:11:20.050 23:51:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:20.050 [global] 00:11:20.050 thread=1 00:11:20.050 invalidate=1 00:11:20.050 rw=write 00:11:20.050 time_based=1 00:11:20.050 runtime=1 00:11:20.050 ioengine=libaio 00:11:20.050 direct=1 
00:11:20.050 bs=4096 00:11:20.050 iodepth=128 00:11:20.050 norandommap=0 00:11:20.050 numjobs=1 00:11:20.050 00:11:20.050 verify_dump=1 00:11:20.050 verify_backlog=512 00:11:20.050 verify_state_save=0 00:11:20.050 do_verify=1 00:11:20.051 verify=crc32c-intel 00:11:20.051 [job0] 00:11:20.051 filename=/dev/nvme0n1 00:11:20.051 [job1] 00:11:20.051 filename=/dev/nvme0n2 00:11:20.051 [job2] 00:11:20.051 filename=/dev/nvme0n3 00:11:20.051 [job3] 00:11:20.051 filename=/dev/nvme0n4 00:11:20.051 Could not set queue depth (nvme0n1) 00:11:20.051 Could not set queue depth (nvme0n2) 00:11:20.051 Could not set queue depth (nvme0n3) 00:11:20.051 Could not set queue depth (nvme0n4) 00:11:20.308 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.308 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.308 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.308 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:20.308 fio-3.35 00:11:20.308 Starting 4 threads 00:11:21.702 00:11:21.702 job0: (groupid=0, jobs=1): err= 0: pid=3890603: Fri Dec 13 23:52:00 2024 00:11:21.702 read: IOPS=3697, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:11:21.702 slat (nsec): min=1388, max=20807k, avg=110436.66, stdev=878178.18 00:11:21.702 clat (usec): min=1893, max=76938, avg=15004.68, stdev=8375.00 00:11:21.702 lat (usec): min=5767, max=76942, avg=15115.12, stdev=8433.70 00:11:21.702 clat percentiles (usec): 00:11:21.702 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10290], 00:11:21.702 | 30.00th=[10814], 40.00th=[11469], 50.00th=[11994], 60.00th=[12649], 00:11:21.702 | 70.00th=[14615], 80.00th=[21103], 90.00th=[22676], 95.00th=[31065], 00:11:21.702 | 99.00th=[56361], 99.50th=[57934], 99.90th=[77071], 99.95th=[77071], 00:11:21.702 | 
99.99th=[77071] 00:11:21.702 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:21.702 slat (usec): min=2, max=19750, avg=126.34, stdev=865.22 00:11:21.702 clat (usec): min=1962, max=88004, avg=17504.02, stdev=12141.89 00:11:21.702 lat (usec): min=2005, max=88009, avg=17630.35, stdev=12220.39 00:11:21.702 clat percentiles (usec): 00:11:21.702 | 1.00th=[ 6849], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10421], 00:11:21.702 | 30.00th=[10683], 40.00th=[11338], 50.00th=[13042], 60.00th=[15926], 00:11:21.702 | 70.00th=[20579], 80.00th=[21627], 90.00th=[25297], 95.00th=[39060], 00:11:21.702 | 99.00th=[81265], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:11:21.702 | 99.99th=[87557] 00:11:21.703 bw ( KiB/s): min=15544, max=17224, per=23.28%, avg=16384.00, stdev=1187.94, samples=2 00:11:21.703 iops : min= 3886, max= 4306, avg=4096.00, stdev=296.98, samples=2 00:11:21.703 lat (msec) : 2=0.03%, 4=0.08%, 10=8.13%, 20=65.80%, 50=23.28% 00:11:21.703 lat (msec) : 100=2.69% 00:11:21.703 cpu : usr=3.98%, sys=4.08%, ctx=401, majf=0, minf=1 00:11:21.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.703 issued rwts: total=3716,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.703 job1: (groupid=0, jobs=1): err= 0: pid=3890605: Fri Dec 13 23:52:00 2024 00:11:21.703 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:11:21.703 slat (nsec): min=1431, max=14034k, avg=93871.01, stdev=686134.32 00:11:21.703 clat (usec): min=3973, max=30320, avg=11931.23, stdev=3751.90 00:11:21.703 lat (usec): min=3988, max=33898, avg=12025.10, stdev=3797.63 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 4686], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10028], 
00:11:21.703 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:11:21.703 | 70.00th=[11863], 80.00th=[14091], 90.00th=[16319], 95.00th=[19006], 00:11:21.703 | 99.00th=[27395], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:11:21.703 | 99.99th=[30278] 00:11:21.703 write: IOPS=5758, BW=22.5MiB/s (23.6MB/s)(22.7MiB/1007msec); 0 zone resets 00:11:21.703 slat (usec): min=2, max=11887, avg=72.76, stdev=467.90 00:11:21.703 clat (usec): min=2811, max=28467, avg=10305.24, stdev=2578.32 00:11:21.703 lat (usec): min=2821, max=28470, avg=10378.00, stdev=2619.70 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 3720], 5.00th=[ 5604], 10.00th=[ 6652], 20.00th=[ 8979], 00:11:21.703 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:11:21.703 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11994], 95.00th=[14353], 00:11:21.703 | 99.00th=[20317], 99.50th=[21890], 99.90th=[21890], 99.95th=[22152], 00:11:21.703 | 99.99th=[28443] 00:11:21.703 bw ( KiB/s): min=21456, max=23920, per=32.24%, avg=22688.00, stdev=1742.31, samples=2 00:11:21.703 iops : min= 5364, max= 5980, avg=5672.00, stdev=435.58, samples=2 00:11:21.703 lat (msec) : 4=0.73%, 10=24.49%, 20=72.06%, 50=2.71% 00:11:21.703 cpu : usr=4.37%, sys=7.06%, ctx=583, majf=0, minf=1 00:11:21.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.703 issued rwts: total=5632,5799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.703 job2: (groupid=0, jobs=1): err= 0: pid=3890606: Fri Dec 13 23:52:00 2024 00:11:21.703 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:11:21.703 slat (nsec): min=1133, max=17587k, avg=149544.20, stdev=1161184.79 00:11:21.703 clat (usec): min=4684, max=73578, avg=18090.54, 
stdev=8080.39 00:11:21.703 lat (usec): min=4693, max=73588, avg=18240.08, stdev=8196.53 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 4686], 5.00th=[ 9634], 10.00th=[11731], 20.00th=[12911], 00:11:21.703 | 30.00th=[13566], 40.00th=[14615], 50.00th=[17171], 60.00th=[17957], 00:11:21.703 | 70.00th=[19268], 80.00th=[21890], 90.00th=[27919], 95.00th=[30802], 00:11:21.703 | 99.00th=[54264], 99.50th=[72877], 99.90th=[73925], 99.95th=[73925], 00:11:21.703 | 99.99th=[73925] 00:11:21.703 write: IOPS=3752, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1012msec); 0 zone resets 00:11:21.703 slat (usec): min=2, max=19576, avg=103.51, stdev=956.48 00:11:21.703 clat (usec): min=2533, max=73541, avg=16712.84, stdev=8592.24 00:11:21.703 lat (usec): min=2556, max=73545, avg=16816.34, stdev=8662.44 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 4686], 5.00th=[ 6521], 10.00th=[ 7963], 20.00th=[10028], 00:11:21.703 | 30.00th=[11469], 40.00th=[13173], 50.00th=[15270], 60.00th=[17695], 00:11:21.703 | 70.00th=[20579], 80.00th=[21627], 90.00th=[23725], 95.00th=[33817], 00:11:21.703 | 99.00th=[48497], 99.50th=[53740], 99.90th=[57934], 99.95th=[73925], 00:11:21.703 | 99.99th=[73925] 00:11:21.703 bw ( KiB/s): min=13024, max=16336, per=20.86%, avg=14680.00, stdev=2341.94, samples=2 00:11:21.703 iops : min= 3256, max= 4084, avg=3670.00, stdev=585.48, samples=2 00:11:21.703 lat (msec) : 4=0.30%, 10=12.50%, 20=56.88%, 50=29.45%, 100=0.87% 00:11:21.703 cpu : usr=2.97%, sys=4.06%, ctx=228, majf=0, minf=1 00:11:21.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.703 issued rwts: total=3584,3798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.703 job3: (groupid=0, jobs=1): err= 0: pid=3890607: Fri Dec 13 
23:52:00 2024 00:11:21.703 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:11:21.703 slat (nsec): min=1179, max=14279k, avg=114756.08, stdev=835180.72 00:11:21.703 clat (usec): min=1318, max=49107, avg=15184.32, stdev=7462.64 00:11:21.703 lat (usec): min=1326, max=49135, avg=15299.08, stdev=7531.97 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 3720], 5.00th=[ 6718], 10.00th=[ 9503], 20.00th=[11469], 00:11:21.703 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[13566], 00:11:21.703 | 70.00th=[15533], 80.00th=[17957], 90.00th=[25560], 95.00th=[34866], 00:11:21.703 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:21.703 | 99.99th=[49021] 00:11:21.703 write: IOPS=4092, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec); 0 zone resets 00:11:21.703 slat (usec): min=2, max=22950, avg=111.09, stdev=922.23 00:11:21.703 clat (usec): min=571, max=55437, avg=15869.92, stdev=9158.22 00:11:21.703 lat (usec): min=582, max=55462, avg=15981.01, stdev=9235.01 00:11:21.703 clat percentiles (usec): 00:11:21.703 | 1.00th=[ 2966], 5.00th=[ 6849], 10.00th=[ 8586], 20.00th=[10683], 00:11:21.703 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[13042], 00:11:21.703 | 70.00th=[15926], 80.00th=[20317], 90.00th=[29754], 95.00th=[38011], 00:11:21.703 | 99.00th=[46924], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:11:21.703 | 99.99th=[55313] 00:11:21.703 bw ( KiB/s): min=13176, max=19592, per=23.28%, avg=16384.00, stdev=4536.80, samples=2 00:11:21.703 iops : min= 3294, max= 4898, avg=4096.00, stdev=1134.20, samples=2 00:11:21.703 lat (usec) : 750=0.02%, 1000=0.10% 00:11:21.703 lat (msec) : 2=0.34%, 4=1.73%, 10=12.26%, 20=66.50%, 50=18.81% 00:11:21.703 lat (msec) : 100=0.24% 00:11:21.703 cpu : usr=3.59%, sys=5.18%, ctx=316, majf=0, minf=2 00:11:21.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:21.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:21.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.703 issued rwts: total=4096,4109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.703 00:11:21.703 Run status group 0 (all jobs): 00:11:21.703 READ: bw=65.7MiB/s (68.9MB/s), 13.8MiB/s-21.8MiB/s (14.5MB/s-22.9MB/s), io=66.5MiB (69.7MB), run=1004-1012msec 00:11:21.703 WRITE: bw=68.7MiB/s (72.1MB/s), 14.7MiB/s-22.5MiB/s (15.4MB/s-23.6MB/s), io=69.5MiB (72.9MB), run=1004-1012msec 00:11:21.703 00:11:21.703 Disk stats (read/write): 00:11:21.703 nvme0n1: ios=3108/3279, merge=0/0, ticks=31859/39728, in_queue=71587, util=99.00% 00:11:21.703 nvme0n2: ios=4650/5047, merge=0/0, ticks=53092/48651, in_queue=101743, util=95.33% 00:11:21.703 nvme0n3: ios=3111/3191, merge=0/0, ticks=50310/47702, in_queue=98012, util=95.74% 00:11:21.703 nvme0n4: ios=3607/3801, merge=0/0, ticks=36956/46004, in_queue=82960, util=96.44% 00:11:21.703 23:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:21.703 [global] 00:11:21.703 thread=1 00:11:21.703 invalidate=1 00:11:21.703 rw=randwrite 00:11:21.703 time_based=1 00:11:21.703 runtime=1 00:11:21.703 ioengine=libaio 00:11:21.703 direct=1 00:11:21.703 bs=4096 00:11:21.703 iodepth=128 00:11:21.703 norandommap=0 00:11:21.703 numjobs=1 00:11:21.703 00:11:21.703 verify_dump=1 00:11:21.703 verify_backlog=512 00:11:21.703 verify_state_save=0 00:11:21.703 do_verify=1 00:11:21.703 verify=crc32c-intel 00:11:21.703 [job0] 00:11:21.703 filename=/dev/nvme0n1 00:11:21.703 [job1] 00:11:21.703 filename=/dev/nvme0n2 00:11:21.703 [job2] 00:11:21.703 filename=/dev/nvme0n3 00:11:21.703 [job3] 00:11:21.703 filename=/dev/nvme0n4 00:11:21.703 Could not set queue depth (nvme0n1) 00:11:21.703 Could not set queue depth (nvme0n2) 00:11:21.703 Could not set queue depth (nvme0n3) 
00:11:21.703 Could not set queue depth (nvme0n4) 00:11:21.961 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.961 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.961 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.961 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:21.961 fio-3.35 00:11:21.961 Starting 4 threads 00:11:23.332 00:11:23.332 job0: (groupid=0, jobs=1): err= 0: pid=3890970: Fri Dec 13 23:52:02 2024 00:11:23.332 read: IOPS=3169, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1005msec) 00:11:23.332 slat (nsec): min=1561, max=15345k, avg=134705.15, stdev=880904.56 00:11:23.332 clat (usec): min=3443, max=40985, avg=16263.81, stdev=4249.00 00:11:23.332 lat (usec): min=7059, max=41009, avg=16398.51, stdev=4325.29 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 7373], 5.00th=[10945], 10.00th=[12518], 20.00th=[13304], 00:11:23.332 | 30.00th=[14091], 40.00th=[14353], 50.00th=[15008], 60.00th=[16188], 00:11:23.332 | 70.00th=[17695], 80.00th=[18482], 90.00th=[22152], 95.00th=[25560], 00:11:23.332 | 99.00th=[29230], 99.50th=[30278], 99.90th=[32113], 99.95th=[37487], 00:11:23.332 | 99.99th=[41157] 00:11:23.332 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:23.332 slat (usec): min=2, max=15597, avg=153.18, stdev=740.29 00:11:23.332 clat (usec): min=7377, max=39942, avg=21036.55, stdev=7493.27 00:11:23.332 lat (usec): min=7388, max=39953, avg=21189.73, stdev=7559.12 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[10814], 20.00th=[13566], 00:11:23.332 | 30.00th=[16188], 40.00th=[17433], 50.00th=[19530], 60.00th=[23200], 00:11:23.332 | 70.00th=[26084], 80.00th=[28443], 90.00th=[30540], 95.00th=[34341], 00:11:23.332 | 
99.00th=[37487], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:11:23.332 | 99.99th=[40109] 00:11:23.332 bw ( KiB/s): min=12792, max=15768, per=21.62%, avg=14280.00, stdev=2104.35, samples=2 00:11:23.332 iops : min= 3198, max= 3942, avg=3570.00, stdev=526.09, samples=2 00:11:23.332 lat (msec) : 4=0.01%, 10=3.03%, 20=63.45%, 50=33.51% 00:11:23.332 cpu : usr=2.69%, sys=4.68%, ctx=357, majf=0, minf=1 00:11:23.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:23.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.332 issued rwts: total=3185,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.332 job1: (groupid=0, jobs=1): err= 0: pid=3890971: Fri Dec 13 23:52:02 2024 00:11:23.332 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:11:23.332 slat (nsec): min=1356, max=5736.0k, avg=92194.83, stdev=477450.30 00:11:23.332 clat (usec): min=7520, max=31792, avg=11884.56, stdev=3537.14 00:11:23.332 lat (usec): min=7524, max=31820, avg=11976.76, stdev=3574.52 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:11:23.332 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10814], 60.00th=[11338], 00:11:23.332 | 70.00th=[11731], 80.00th=[12518], 90.00th=[15926], 95.00th=[20579], 00:11:23.332 | 99.00th=[26608], 99.50th=[27395], 99.90th=[27657], 99.95th=[29754], 00:11:23.332 | 99.99th=[31851] 00:11:23.332 write: IOPS=5292, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1004msec); 0 zone resets 00:11:23.332 slat (usec): min=2, max=11479, avg=94.03, stdev=544.21 00:11:23.332 clat (usec): min=3070, max=39044, avg=12320.51, stdev=4798.79 00:11:23.332 lat (usec): min=3713, max=39052, avg=12414.54, stdev=4834.48 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 
9110], 20.00th=[10159], 00:11:23.332 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:11:23.332 | 70.00th=[11207], 80.00th=[12387], 90.00th=[16909], 95.00th=[24249], 00:11:23.332 | 99.00th=[31589], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:11:23.332 | 99.99th=[39060] 00:11:23.332 bw ( KiB/s): min=20480, max=21016, per=31.42%, avg=20748.00, stdev=379.01, samples=2 00:11:23.332 iops : min= 5120, max= 5254, avg=5187.00, stdev=94.75, samples=2 00:11:23.332 lat (msec) : 4=0.09%, 10=20.69%, 20=72.14%, 50=7.08% 00:11:23.332 cpu : usr=4.39%, sys=5.08%, ctx=499, majf=0, minf=1 00:11:23.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:23.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.332 issued rwts: total=5120,5314,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.332 job2: (groupid=0, jobs=1): err= 0: pid=3890972: Fri Dec 13 23:52:02 2024 00:11:23.332 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:11:23.332 slat (nsec): min=1526, max=12582k, avg=174213.18, stdev=980273.24 00:11:23.332 clat (usec): min=6227, max=42504, avg=22061.23, stdev=6241.08 00:11:23.332 lat (usec): min=6236, max=44616, avg=22235.45, stdev=6335.59 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[11600], 5.00th=[14746], 10.00th=[15139], 20.00th=[17171], 00:11:23.332 | 30.00th=[18482], 40.00th=[19268], 50.00th=[19792], 60.00th=[21890], 00:11:23.332 | 70.00th=[25297], 80.00th=[27919], 90.00th=[31851], 95.00th=[33424], 00:11:23.332 | 99.00th=[35914], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:11:23.332 | 99.99th=[42730] 00:11:23.332 write: IOPS=2740, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1005msec); 0 zone resets 00:11:23.332 slat (usec): min=2, max=10540, avg=195.63, stdev=904.04 00:11:23.332 clat (usec): min=3875, 
max=53055, avg=25699.87, stdev=9174.86 00:11:23.332 lat (usec): min=5871, max=53062, avg=25895.50, stdev=9225.11 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 8160], 5.00th=[12518], 10.00th=[15401], 20.00th=[19006], 00:11:23.332 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22938], 60.00th=[26608], 00:11:23.332 | 70.00th=[29492], 80.00th=[33817], 90.00th=[39060], 95.00th=[42730], 00:11:23.332 | 99.00th=[47973], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:11:23.332 | 99.99th=[53216] 00:11:23.332 bw ( KiB/s): min= 8720, max=12288, per=15.91%, avg=10504.00, stdev=2522.96, samples=2 00:11:23.332 iops : min= 2180, max= 3072, avg=2626.00, stdev=630.74, samples=2 00:11:23.332 lat (msec) : 4=0.02%, 10=1.62%, 20=36.04%, 50=62.06%, 100=0.26% 00:11:23.332 cpu : usr=1.99%, sys=3.29%, ctx=335, majf=0, minf=2 00:11:23.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:23.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.332 issued rwts: total=2560,2754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.332 job3: (groupid=0, jobs=1): err= 0: pid=3890973: Fri Dec 13 23:52:02 2024 00:11:23.332 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:11:23.332 slat (nsec): min=1217, max=49806k, avg=104882.99, stdev=946588.32 00:11:23.332 clat (usec): min=4575, max=68431, avg=14610.85, stdev=9876.74 00:11:23.332 lat (usec): min=4645, max=68439, avg=14715.73, stdev=9916.63 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 6063], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[11207], 00:11:23.332 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:11:23.332 | 70.00th=[13304], 80.00th=[15008], 90.00th=[19792], 95.00th=[25297], 00:11:23.332 | 99.00th=[65799], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 
00:11:23.332 | 99.99th=[68682] 00:11:23.332 write: IOPS=4916, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1005msec); 0 zone resets 00:11:23.332 slat (nsec): min=1963, max=10383k, avg=88717.52, stdev=601383.51 00:11:23.332 clat (usec): min=441, max=27508, avg=12077.04, stdev=2994.10 00:11:23.332 lat (usec): min=1467, max=27540, avg=12165.76, stdev=3034.33 00:11:23.332 clat percentiles (usec): 00:11:23.332 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 8848], 20.00th=[10683], 00:11:23.332 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:11:23.333 | 70.00th=[12649], 80.00th=[13042], 90.00th=[16319], 95.00th=[17957], 00:11:23.333 | 99.00th=[20841], 99.50th=[22414], 99.90th=[23987], 99.95th=[24249], 00:11:23.333 | 99.99th=[27395] 00:11:23.333 bw ( KiB/s): min=18024, max=20480, per=29.15%, avg=19252.00, stdev=1736.65, samples=2 00:11:23.333 iops : min= 4506, max= 5120, avg=4813.00, stdev=434.16, samples=2 00:11:23.333 lat (usec) : 500=0.01% 00:11:23.333 lat (msec) : 2=0.02%, 4=0.14%, 10=15.23%, 20=78.55%, 50=4.33% 00:11:23.333 lat (msec) : 100=1.73% 00:11:23.333 cpu : usr=2.89%, sys=5.48%, ctx=380, majf=0, minf=1 00:11:23.333 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:23.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.333 issued rwts: total=4608,4941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.333 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.333 00:11:23.333 Run status group 0 (all jobs): 00:11:23.333 READ: bw=60.1MiB/s (63.1MB/s), 9.95MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=60.4MiB (63.4MB), run=1004-1005msec 00:11:23.333 WRITE: bw=64.5MiB/s (67.6MB/s), 10.7MiB/s-20.7MiB/s (11.2MB/s-21.7MB/s), io=64.8MiB (68.0MB), run=1004-1005msec 00:11:23.333 00:11:23.333 Disk stats (read/write): 00:11:23.333 nvme0n1: ios=2610/2967, merge=0/0, ticks=20956/30898, in_queue=51854, util=86.57% 
00:11:23.333 nvme0n2: ios=4146/4452, merge=0/0, ticks=17164/17271, in_queue=34435, util=97.97% 00:11:23.333 nvme0n3: ios=2090/2542, merge=0/0, ticks=15156/22038, in_queue=37194, util=89.05% 00:11:23.333 nvme0n4: ios=4135/4396, merge=0/0, ticks=31299/29635, in_queue=60934, util=96.95% 00:11:23.333 23:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:23.333 23:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3891198 00:11:23.333 23:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:23.333 23:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:23.333 [global] 00:11:23.333 thread=1 00:11:23.333 invalidate=1 00:11:23.333 rw=read 00:11:23.333 time_based=1 00:11:23.333 runtime=10 00:11:23.333 ioengine=libaio 00:11:23.333 direct=1 00:11:23.333 bs=4096 00:11:23.333 iodepth=1 00:11:23.333 norandommap=1 00:11:23.333 numjobs=1 00:11:23.333 00:11:23.333 [job0] 00:11:23.333 filename=/dev/nvme0n1 00:11:23.333 [job1] 00:11:23.333 filename=/dev/nvme0n2 00:11:23.333 [job2] 00:11:23.333 filename=/dev/nvme0n3 00:11:23.333 [job3] 00:11:23.333 filename=/dev/nvme0n4 00:11:23.333 Could not set queue depth (nvme0n1) 00:11:23.333 Could not set queue depth (nvme0n2) 00:11:23.333 Could not set queue depth (nvme0n3) 00:11:23.333 Could not set queue depth (nvme0n4) 00:11:23.333 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.333 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.333 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.333 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:23.333 fio-3.35 00:11:23.333 Starting 4 
threads 00:11:26.610 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:26.610 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=36466688, buflen=4096 00:11:26.610 fio: pid=3891344, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.610 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:26.610 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31387648, buflen=4096 00:11:26.610 fio: pid=3891343, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.610 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.610 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:26.610 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22855680, buflen=4096 00:11:26.610 fio: pid=3891341, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:26.868 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:26.868 23:52:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:26.868 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=921600, buflen=4096 00:11:26.868 fio: pid=3891342, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:11:26.868 00:11:26.868 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, 
error=Operation not supported): pid=3891341: Fri Dec 13 23:52:05 2024 00:11:26.868 read: IOPS=1773, BW=7092KiB/s (7263kB/s)(21.8MiB/3147msec) 00:11:26.868 slat (nsec): min=6110, max=79818, avg=7940.14, stdev=2285.06 00:11:26.868 clat (usec): min=196, max=45252, avg=550.40, stdev=3270.43 00:11:26.868 lat (usec): min=203, max=45274, avg=558.34, stdev=3271.44 00:11:26.868 clat percentiles (usec): 00:11:26.868 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 245], 00:11:26.868 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 297], 00:11:26.868 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 355], 95.00th=[ 375], 00:11:26.868 | 99.00th=[ 494], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:11:26.868 | 99.99th=[45351] 00:11:26.868 bw ( KiB/s): min= 96, max=12120, per=28.12%, avg=7435.17, stdev=5756.77, samples=6 00:11:26.868 iops : min= 24, max= 3030, avg=1858.67, stdev=1439.38, samples=6 00:11:26.868 lat (usec) : 250=24.06%, 500=74.97%, 750=0.30% 00:11:26.868 lat (msec) : 50=0.65% 00:11:26.868 cpu : usr=0.86%, sys=2.73%, ctx=5583, majf=0, minf=1 00:11:26.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 issued rwts: total=5581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.868 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3891342: Fri Dec 13 23:52:05 2024 00:11:26.868 read: IOPS=66, BW=266KiB/s (272kB/s)(900KiB/3384msec) 00:11:26.868 slat (usec): min=6, max=7071, avg=48.15, stdev=471.37 00:11:26.868 clat (usec): min=230, max=41989, avg=14986.18, stdev=19568.21 00:11:26.868 lat (usec): min=239, max=42012, avg=15003.12, stdev=19578.76 00:11:26.868 clat percentiles (usec): 00:11:26.868 | 1.00th=[ 249], 5.00th=[ 289], 
10.00th=[ 302], 20.00th=[ 310], 00:11:26.868 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 363], 60.00th=[ 502], 00:11:26.868 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:26.868 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.868 | 99.99th=[42206] 00:11:26.868 bw ( KiB/s): min= 96, max= 1128, per=1.08%, avg=286.67, stdev=413.51, samples=6 00:11:26.868 iops : min= 24, max= 282, avg=71.67, stdev=103.38, samples=6 00:11:26.868 lat (usec) : 250=1.33%, 500=57.96%, 750=4.42% 00:11:26.868 lat (msec) : 50=35.84% 00:11:26.868 cpu : usr=0.00%, sys=0.33%, ctx=228, majf=0, minf=2 00:11:26.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 issued rwts: total=226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.868 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3891343: Fri Dec 13 23:52:05 2024 00:11:26.868 read: IOPS=2628, BW=10.3MiB/s (10.8MB/s)(29.9MiB/2916msec) 00:11:26.868 slat (nsec): min=6658, max=38846, avg=8248.17, stdev=1440.16 00:11:26.868 clat (usec): min=207, max=42087, avg=367.95, stdev=1875.58 00:11:26.868 lat (usec): min=215, max=42110, avg=376.20, stdev=1876.26 00:11:26.868 clat percentiles (usec): 00:11:26.868 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 247], 00:11:26.868 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:11:26.868 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:11:26.868 | 99.00th=[ 478], 99.50th=[ 506], 99.90th=[41157], 99.95th=[41157], 00:11:26.868 | 99.99th=[42206] 00:11:26.868 bw ( KiB/s): min= 688, max=14424, per=37.28%, avg=9857.60, stdev=5747.62, samples=5 00:11:26.868 iops : min= 172, max= 
3606, avg=2464.40, stdev=1436.90, samples=5 00:11:26.868 lat (usec) : 250=23.16%, 500=76.21%, 750=0.39% 00:11:26.868 lat (msec) : 20=0.01%, 50=0.21% 00:11:26.868 cpu : usr=0.79%, sys=2.64%, ctx=7664, majf=0, minf=2 00:11:26.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 issued rwts: total=7664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.868 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3891344: Fri Dec 13 23:52:05 2024 00:11:26.868 read: IOPS=3280, BW=12.8MiB/s (13.4MB/s)(34.8MiB/2714msec) 00:11:26.868 slat (nsec): min=6910, max=42894, avg=8102.20, stdev=1530.33 00:11:26.868 clat (usec): min=206, max=41439, avg=292.51, stdev=1061.42 00:11:26.868 lat (usec): min=218, max=41446, avg=300.62, stdev=1061.42 00:11:26.868 clat percentiles (usec): 00:11:26.868 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:11:26.868 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:11:26.868 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 367], 00:11:26.868 | 99.00th=[ 441], 99.50th=[ 494], 99.90th=[ 562], 99.95th=[41157], 00:11:26.868 | 99.99th=[41681] 00:11:26.868 bw ( KiB/s): min= 9920, max=15144, per=49.48%, avg=13084.80, stdev=2039.42, samples=5 00:11:26.868 iops : min= 2480, max= 3786, avg=3271.20, stdev=509.86, samples=5 00:11:26.868 lat (usec) : 250=36.22%, 500=63.33%, 750=0.37% 00:11:26.868 lat (msec) : 50=0.07% 00:11:26.868 cpu : usr=1.81%, sys=5.20%, ctx=8904, majf=0, minf=2 00:11:26.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 complete : 0=0.1%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.868 issued rwts: total=8904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.868 00:11:26.868 Run status group 0 (all jobs): 00:11:26.868 READ: bw=25.8MiB/s (27.1MB/s), 266KiB/s-12.8MiB/s (272kB/s-13.4MB/s), io=87.4MiB (91.6MB), run=2714-3384msec 00:11:26.868 00:11:26.868 Disk stats (read/write): 00:11:26.868 nvme0n1: ios=5579/0, merge=0/0, ticks=2954/0, in_queue=2954, util=95.78% 00:11:26.868 nvme0n2: ios=224/0, merge=0/0, ticks=3334/0, in_queue=3334, util=96.49% 00:11:26.868 nvme0n3: ios=7515/0, merge=0/0, ticks=2734/0, in_queue=2734, util=96.55% 00:11:26.868 nvme0n4: ios=8572/0, merge=0/0, ticks=2410/0, in_queue=2410, util=96.49% 00:11:27.125 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.125 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:27.382 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.382 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:27.640 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.640 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:27.897 23:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:27.897 23:52:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:28.155 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.155 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:28.412 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:28.412 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3891198 00:11:28.412 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:28.412 23:52:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:29.343 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:29.344 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:29.344 nvmf hotplug test: fio failed as expected 00:11:29.344 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.601 rmmod nvme_tcp 00:11:29.601 rmmod nvme_fabrics 00:11:29.601 rmmod nvme_keyring 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3888118 ']' 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3888118 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3888118 ']' 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3888118 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3888118 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3888118' 00:11:29.601 killing process with pid 3888118 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3888118 00:11:29.601 23:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3888118 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:31.105 23:52:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.105 23:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.008 00:11:33.008 real 0m29.234s 00:11:33.008 user 1m58.911s 00:11:33.008 sys 0m7.888s 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.008 ************************************ 00:11:33.008 END TEST nvmf_fio_target 00:11:33.008 ************************************ 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.008 23:52:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:33.008 ************************************ 00:11:33.008 START TEST nvmf_bdevio 00:11:33.008 ************************************ 00:11:33.008 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:33.008 * Looking for test storage... 00:11:33.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.008 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.009 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.009 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.268 23:52:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.268 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:33.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.268 --rc genhtml_branch_coverage=1 00:11:33.268 --rc genhtml_function_coverage=1 00:11:33.268 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.269 23:52:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.545 23:52:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.545 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.546 23:52:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:38.546 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:38.546 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.546 
23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:38.546 Found net devices under 0000:af:00.0: cvl_0_0 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:38.546 Found net devices under 0000:af:00.1: cvl_0_1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:38.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:11:38.546 00:11:38.546 --- 10.0.0.2 ping statistics --- 00:11:38.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.546 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:38.546 00:11:38.546 --- 10.0.0.1 ping statistics --- 00:11:38.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.546 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.546 23:52:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3895956 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3895956 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3895956 ']' 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:38.546 23:52:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:38.546 [2024-12-13 23:52:17.380198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:38.546 [2024-12-13 23:52:17.380305] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.546 [2024-12-13 23:52:17.497068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.546 [2024-12-13 23:52:17.602061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.546 [2024-12-13 23:52:17.602105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.546 [2024-12-13 23:52:17.602115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.546 [2024-12-13 23:52:17.602128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.546 [2024-12-13 23:52:17.602136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.546 [2024-12-13 23:52:17.604515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:38.546 [2024-12-13 23:52:17.604612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:38.546 [2024-12-13 23:52:17.604671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.547 [2024-12-13 23:52:17.604694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:39.113 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.114 [2024-12-13 23:52:18.221738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:39.114 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.114 23:52:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.373 Malloc0 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:39.373 [2024-12-13 23:52:18.331946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:39.373 { 00:11:39.373 "params": { 00:11:39.373 "name": "Nvme$subsystem", 00:11:39.373 "trtype": "$TEST_TRANSPORT", 00:11:39.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:39.373 "adrfam": "ipv4", 00:11:39.373 "trsvcid": "$NVMF_PORT", 00:11:39.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:39.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:39.373 "hdgst": ${hdgst:-false}, 00:11:39.373 "ddgst": ${ddgst:-false} 00:11:39.373 }, 00:11:39.373 "method": "bdev_nvme_attach_controller" 00:11:39.373 } 00:11:39.373 EOF 00:11:39.373 )") 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:39.373 23:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:39.373 "params": { 00:11:39.373 "name": "Nvme1", 00:11:39.373 "trtype": "tcp", 00:11:39.373 "traddr": "10.0.0.2", 00:11:39.373 "adrfam": "ipv4", 00:11:39.373 "trsvcid": "4420", 00:11:39.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:39.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:39.373 "hdgst": false, 00:11:39.373 "ddgst": false 00:11:39.373 }, 00:11:39.373 "method": "bdev_nvme_attach_controller" 00:11:39.373 }' 00:11:39.373 [2024-12-13 23:52:18.410769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:11:39.373 [2024-12-13 23:52:18.410856] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896131 ] 00:11:39.632 [2024-12-13 23:52:18.527608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.632 [2024-12-13 23:52:18.646523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.632 [2024-12-13 23:52:18.646590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.632 [2024-12-13 23:52:18.646595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.200 I/O targets: 00:11:40.200 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:40.200 00:11:40.200 00:11:40.200 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.200 http://cunit.sourceforge.net/ 00:11:40.200 00:11:40.200 00:11:40.200 Suite: bdevio tests on: Nvme1n1 00:11:40.200 Test: blockdev write read block ...passed 00:11:40.200 Test: blockdev write zeroes read block ...passed 00:11:40.200 Test: blockdev write zeroes read no split ...passed 00:11:40.458 Test: blockdev write zeroes read split 
...passed 00:11:40.458 Test: blockdev write zeroes read split partial ...passed 00:11:40.458 Test: blockdev reset ...[2024-12-13 23:52:19.471038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:40.458 [2024-12-13 23:52:19.471163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:11:40.458 [2024-12-13 23:52:19.486395] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:40.458 passed 00:11:40.458 Test: blockdev write read 8 blocks ...passed 00:11:40.458 Test: blockdev write read size > 128k ...passed 00:11:40.458 Test: blockdev write read invalid size ...passed 00:11:40.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.458 Test: blockdev write read max offset ...passed 00:11:40.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.718 Test: blockdev writev readv 8 blocks ...passed 00:11:40.718 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.718 Test: blockdev writev readv block ...passed 00:11:40.718 Test: blockdev writev readv size > 128k ...passed 00:11:40.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.718 Test: blockdev comparev and writev ...[2024-12-13 23:52:19.700064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.700109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.700130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 
23:52:19.700146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.700460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.700477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.700494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.700505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.700806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.700823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.700839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.700849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.701122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.701144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.701161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:40.718 [2024-12-13 23:52:19.701171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:40.718 passed 00:11:40.718 Test: blockdev nvme passthru rw ...passed 00:11:40.718 Test: blockdev nvme passthru vendor specific ...[2024-12-13 23:52:19.782959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.718 [2024-12-13 23:52:19.782993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.783165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.718 [2024-12-13 23:52:19.783179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.783312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.718 [2024-12-13 23:52:19.783325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:40.718 [2024-12-13 23:52:19.783454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:40.718 [2024-12-13 23:52:19.783467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:40.718 passed 00:11:40.718 Test: blockdev nvme admin passthru ...passed 00:11:40.718 Test: blockdev copy ...passed 00:11:40.718 00:11:40.718 Run Summary: Type Total Ran Passed Failed Inactive 00:11:40.718 suites 1 1 n/a 0 0 00:11:40.718 tests 23 23 23 0 0 00:11:40.718 asserts 152 152 152 0 n/a 00:11:40.718 00:11:40.718 Elapsed time = 1.332 seconds 
00:11:41.653 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.654 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.654 rmmod nvme_tcp 00:11:41.654 rmmod nvme_fabrics 00:11:41.654 rmmod nvme_keyring 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3895956 ']' 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3895956 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3895956 ']' 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3895956 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3895956 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3895956' 00:11:41.912 killing process with pid 3895956 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3895956 00:11:41.912 23:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3895956 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.291 23:52:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.205 00:11:45.205 real 0m12.257s 00:11:45.205 user 0m23.778s 00:11:45.205 sys 0m4.594s 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:45.205 ************************************ 00:11:45.205 END TEST nvmf_bdevio 00:11:45.205 ************************************ 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:45.205 00:11:45.205 real 5m2.195s 00:11:45.205 user 12m1.448s 00:11:45.205 sys 1m35.981s 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.205 23:52:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.205 ************************************ 00:11:45.205 END TEST nvmf_target_core 00:11:45.205 ************************************ 00:11:45.205 23:52:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.205 23:52:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.205 23:52:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.205 23:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:11:45.465 ************************************ 00:11:45.465 START TEST nvmf_target_extra 00:11:45.465 ************************************ 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:45.465 * Looking for test storage... 00:11:45.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.465 --rc genhtml_branch_coverage=1 00:11:45.465 --rc genhtml_function_coverage=1 00:11:45.465 --rc genhtml_legend=1 00:11:45.465 --rc geninfo_all_blocks=1 
00:11:45.465 --rc geninfo_unexecuted_blocks=1 00:11:45.465 00:11:45.465 ' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.465 --rc genhtml_branch_coverage=1 00:11:45.465 --rc genhtml_function_coverage=1 00:11:45.465 --rc genhtml_legend=1 00:11:45.465 --rc geninfo_all_blocks=1 00:11:45.465 --rc geninfo_unexecuted_blocks=1 00:11:45.465 00:11:45.465 ' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.465 --rc genhtml_branch_coverage=1 00:11:45.465 --rc genhtml_function_coverage=1 00:11:45.465 --rc genhtml_legend=1 00:11:45.465 --rc geninfo_all_blocks=1 00:11:45.465 --rc geninfo_unexecuted_blocks=1 00:11:45.465 00:11:45.465 ' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.465 --rc genhtml_branch_coverage=1 00:11:45.465 --rc genhtml_function_coverage=1 00:11:45.465 --rc genhtml_legend=1 00:11:45.465 --rc geninfo_all_blocks=1 00:11:45.465 --rc geninfo_unexecuted_blocks=1 00:11:45.465 00:11:45.465 ' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.465 23:52:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.466 23:52:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.726 ************************************ 00:11:45.726 START TEST nvmf_example 00:11:45.726 ************************************ 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:45.726 * Looking for test storage... 00:11:45.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.726 
23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.726 --rc genhtml_branch_coverage=1 00:11:45.726 --rc genhtml_function_coverage=1 00:11:45.726 --rc genhtml_legend=1 00:11:45.726 --rc geninfo_all_blocks=1 00:11:45.726 --rc geninfo_unexecuted_blocks=1 00:11:45.726 00:11:45.726 ' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.726 --rc genhtml_branch_coverage=1 00:11:45.726 --rc genhtml_function_coverage=1 00:11:45.726 --rc genhtml_legend=1 00:11:45.726 --rc geninfo_all_blocks=1 00:11:45.726 --rc geninfo_unexecuted_blocks=1 00:11:45.726 00:11:45.726 ' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.726 --rc genhtml_branch_coverage=1 00:11:45.726 --rc genhtml_function_coverage=1 00:11:45.726 --rc genhtml_legend=1 00:11:45.726 --rc geninfo_all_blocks=1 00:11:45.726 --rc geninfo_unexecuted_blocks=1 00:11:45.726 00:11:45.726 ' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.726 --rc 
genhtml_branch_coverage=1 00:11:45.726 --rc genhtml_function_coverage=1 00:11:45.726 --rc genhtml_legend=1 00:11:45.726 --rc geninfo_all_blocks=1 00:11:45.726 --rc geninfo_unexecuted_blocks=1 00:11:45.726 00:11:45.726 ' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.726 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:45.727 23:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.727 
23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.727 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:50.997 23:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:50.997 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:50.997 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:50.997 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:50.998 Found net devices under 0000:af:00.0: cvl_0_0 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:50.998 23:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:50.998 Found net devices under 0000:af:00.1: cvl_0_1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.998 
23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:50.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:11:50.998 00:11:50.998 --- 10.0.0.2 ping statistics --- 00:11:50.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.998 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:50.998 00:11:50.998 --- 10.0.0.1 ping statistics --- 00:11:50.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.998 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:50.998 23:52:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3900183 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3900183 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3900183 ']' 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:50.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.998 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:51.935 
23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:51.935 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.146 Initializing NVMe Controllers 00:12:04.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:04.146 Initialization complete. Launching workers. 00:12:04.146 ======================================================== 00:12:04.146 Latency(us) 00:12:04.146 Device Information : IOPS MiB/s Average min max 00:12:04.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16564.10 64.70 3863.14 800.22 15333.10 00:12:04.146 ======================================================== 00:12:04.146 Total : 16564.10 64.70 3863.14 800.22 15333.10 00:12:04.146 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.146 rmmod nvme_tcp 00:12:04.146 rmmod nvme_fabrics 00:12:04.146 rmmod nvme_keyring 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3900183 ']' 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3900183 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3900183 ']' 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3900183 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900183 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900183' 00:12:04.146 killing process with pid 3900183 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3900183 00:12:04.146 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3900183 00:12:04.146 nvmf threads initialize successfully 00:12:04.146 bdev subsystem init successfully 00:12:04.146 created a nvmf target service 00:12:04.146 create targets's poll groups done 00:12:04.146 all subsystems of target started 00:12:04.146 nvmf target is running 00:12:04.146 all subsystems of target stopped 00:12:04.146 destroy targets's poll groups done 00:12:04.146 destroyed the nvmf target service 00:12:04.146 bdev subsystem 
finish successfully 00:12:04.146 nvmf threads destroy successfully 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.146 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.051 00:12:06.051 real 0m20.294s 00:12:06.051 user 0m49.886s 00:12:06.051 sys 0m5.514s 00:12:06.051 
23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:06.051 ************************************ 00:12:06.051 END TEST nvmf_example 00:12:06.051 ************************************ 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.051 ************************************ 00:12:06.051 START TEST nvmf_filesystem 00:12:06.051 ************************************ 00:12:06.051 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:06.051 * Looking for test storage... 
00:12:06.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.051 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:06.052 
23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.052 --rc lcov_branch_coverage=1 --rc 
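The trace above steps through scripts/common.sh's dotted-version comparison (`lt 1.15 2` → `cmp_versions`, splitting each version on `.-:` and comparing fields numerically). The following is a minimal standalone sketch of that technique, not SPDK's actual helper; `ver_lt` is a hypothetical name:

```shell
# Hypothetical reduction of the field-by-field version comparison the
# xtrace above walks through: split on '.', compare each numeric field,
# padding the shorter version with zeros.
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0    # strictly less in this field: done
        (( x > y )) && return 1    # strictly greater: not less-than
    done
    return 1                       # all fields equal: not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Note that a plain string comparison would get this wrong (`"1.15" > "2"` lexically), which is why the trace compares one numeric field at a time.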
lcov_function_coverage=1 00:12:06.052 --rc genhtml_branch_coverage=1 00:12:06.052 --rc genhtml_function_coverage=1 00:12:06.052 --rc genhtml_legend=1 00:12:06.052 --rc geninfo_all_blocks=1 00:12:06.052 --rc geninfo_unexecuted_blocks=1 00:12:06.052 00:12:06.052 ' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.052 --rc genhtml_branch_coverage=1 00:12:06.052 --rc genhtml_function_coverage=1 00:12:06.052 --rc genhtml_legend=1 00:12:06.052 --rc geninfo_all_blocks=1 00:12:06.052 --rc geninfo_unexecuted_blocks=1 00:12:06.052 00:12:06.052 ' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.052 --rc genhtml_branch_coverage=1 00:12:06.052 --rc genhtml_function_coverage=1 00:12:06.052 --rc genhtml_legend=1 00:12:06.052 --rc geninfo_all_blocks=1 00:12:06.052 --rc geninfo_unexecuted_blocks=1 00:12:06.052 00:12:06.052 ' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.052 --rc genhtml_branch_coverage=1 00:12:06.052 --rc genhtml_function_coverage=1 00:12:06.052 --rc genhtml_legend=1 00:12:06.052 --rc geninfo_all_blocks=1 00:12:06.052 --rc geninfo_unexecuted_blocks=1 00:12:06.052 00:12:06.052 ' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:06.052 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:06.052 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:06.052 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:06.052 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:06.052 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:06.053 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:06.053 
23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:06.053 #define SPDK_CONFIG_H 00:12:06.053 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:06.053 #define SPDK_CONFIG_APPS 1 00:12:06.053 #define SPDK_CONFIG_ARCH native 00:12:06.053 #define SPDK_CONFIG_ASAN 1 00:12:06.053 #undef SPDK_CONFIG_AVAHI 00:12:06.053 #undef SPDK_CONFIG_CET 00:12:06.053 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:06.053 #define SPDK_CONFIG_COVERAGE 1 00:12:06.053 #define SPDK_CONFIG_CROSS_PREFIX 00:12:06.053 #undef SPDK_CONFIG_CRYPTO 00:12:06.053 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:06.053 #undef SPDK_CONFIG_CUSTOMOCF 00:12:06.053 #undef SPDK_CONFIG_DAOS 00:12:06.053 #define SPDK_CONFIG_DAOS_DIR 00:12:06.053 #define SPDK_CONFIG_DEBUG 1 00:12:06.053 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:06.053 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:06.053 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:06.053 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:06.053 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:06.053 #undef SPDK_CONFIG_DPDK_UADK 00:12:06.053 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:06.053 #define SPDK_CONFIG_EXAMPLES 1 00:12:06.053 #undef SPDK_CONFIG_FC 00:12:06.053 #define SPDK_CONFIG_FC_PATH 00:12:06.053 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:06.053 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:06.053 #define SPDK_CONFIG_FSDEV 1 00:12:06.053 #undef SPDK_CONFIG_FUSE 00:12:06.053 #undef SPDK_CONFIG_FUZZER 00:12:06.053 #define SPDK_CONFIG_FUZZER_LIB 00:12:06.053 #undef SPDK_CONFIG_GOLANG 00:12:06.053 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:06.053 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:06.053 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:06.053 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:06.053 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:06.053 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:06.053 #undef SPDK_CONFIG_HAVE_LZ4 00:12:06.053 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:06.053 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:06.053 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:06.053 #define SPDK_CONFIG_IDXD 1 00:12:06.053 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:06.053 #undef SPDK_CONFIG_IPSEC_MB 00:12:06.053 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:06.053 #define SPDK_CONFIG_ISAL 1 00:12:06.053 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:06.053 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:06.053 #define SPDK_CONFIG_LIBDIR 00:12:06.053 #undef SPDK_CONFIG_LTO 00:12:06.053 #define SPDK_CONFIG_MAX_LCORES 128 00:12:06.053 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:06.053 #define SPDK_CONFIG_NVME_CUSE 1 00:12:06.053 #undef SPDK_CONFIG_OCF 00:12:06.053 #define SPDK_CONFIG_OCF_PATH 00:12:06.053 #define SPDK_CONFIG_OPENSSL_PATH 00:12:06.053 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:06.053 #define SPDK_CONFIG_PGO_DIR 00:12:06.053 #undef SPDK_CONFIG_PGO_USE 00:12:06.053 #define SPDK_CONFIG_PREFIX /usr/local 00:12:06.053 #undef SPDK_CONFIG_RAID5F 00:12:06.053 #undef SPDK_CONFIG_RBD 00:12:06.053 #define SPDK_CONFIG_RDMA 1 00:12:06.053 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:06.053 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:06.053 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:06.053 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:06.053 #define SPDK_CONFIG_SHARED 1 00:12:06.053 #undef SPDK_CONFIG_SMA 00:12:06.053 #define SPDK_CONFIG_TESTS 1 00:12:06.053 #undef SPDK_CONFIG_TSAN 00:12:06.053 #define SPDK_CONFIG_UBLK 1 00:12:06.053 #define SPDK_CONFIG_UBSAN 1 00:12:06.053 #undef SPDK_CONFIG_UNIT_TESTS 00:12:06.053 #undef SPDK_CONFIG_URING 00:12:06.053 #define SPDK_CONFIG_URING_PATH 00:12:06.053 #undef SPDK_CONFIG_URING_ZNS 00:12:06.053 #undef SPDK_CONFIG_USDT 00:12:06.053 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:06.053 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:06.053 #undef SPDK_CONFIG_VFIO_USER 00:12:06.053 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:06.053 #define SPDK_CONFIG_VHOST 1 00:12:06.053 #define SPDK_CONFIG_VIRTIO 1 00:12:06.053 #undef SPDK_CONFIG_VTUNE 00:12:06.053 #define SPDK_CONFIG_VTUNE_DIR 00:12:06.053 #define SPDK_CONFIG_WERROR 1 00:12:06.053 #define SPDK_CONFIG_WPDK_DIR 00:12:06.053 #undef SPDK_CONFIG_XNVME 00:12:06.053 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
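The block above checks that each `CONFIG_*` value from build_config.sh is mirrored in spdk/include/spdk/config.h: `y` becomes `#define SPDK_CONFIG_<NAME> 1`, `n` becomes `#undef SPDK_CONFIG_<NAME>`, and other values (paths, `native`, `128`) become value-carrying defines. The sketch below reproduces only that visible mapping; `emit_config_h` is a hypothetical name, not SPDK's actual configure-time generator:

```shell
# Hypothetical sketch of the CONFIG_* -> config.h rule visible in the
# dump above. Takes NAME=VALUE arguments and prints the matching
# preprocessor line for each.
emit_config_h() {
    local var name val
    for var in "$@"; do
        name=${var%%=*}          # text before the first '='
        val=${var#*=}            # text after the first '='
        case "$val" in
            y) printf '#define SPDK_%s 1\n' "$name" ;;
            n) printf '#undef SPDK_%s\n' "$name" ;;
            *) printf '#define SPDK_%s %s\n' "$name" "$val" ;;
        esac
    done
}

emit_config_h CONFIG_DEBUG=y CONFIG_FUZZER=n CONFIG_ARCH=native
```

This is why the test script can grep config.h for `#define SPDK_CONFIG_DEBUG` (as the `[[ ... == *\#\d\e\f\i\n\e... ]]` match above does) instead of re-sourcing the build config.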
00:12:06.053 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.054 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.054 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:06.054 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.054 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:06.054 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
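The `paths/export.sh` trace above prepends the same toolchain directories (`/opt/go/...`, `/opt/protoc/...`, `/opt/golangci/...`) on every source, so PATH accumulates many duplicate entries. A hedged sketch of a first-occurrence-wins dedup pass (`dedup_path` is a hypothetical helper, not something paths/export.sh actually runs):

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each,
# so repeated sourcing like the trace above stays idempotent.
dedup_path() {
    local out= seen=: entry
    local IFS=:
    for entry in $1; do
        # skip empty fields and anything already emitted
        [ -n "$entry" ] || continue
        case "$seen" in *":$entry:"*) continue ;; esac
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
```

Applied as `PATH=$(dedup_path "$PATH")` after the exports, this would keep the trace's PATH to one copy of each directory regardless of how many times the script is sourced.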
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:06.316 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:06.316 
23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:06.316 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:06.317 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:06.317 
23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:06.317 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:06.317 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:06.318 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3902751 ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3902751 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tXYx1L 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tXYx1L/tests/target /tmp/spdk.tXYx1L 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88647094272 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552421888 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6905327616 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47764844544 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776210944 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087466496 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23019520 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.319 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775936512 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776210944 00:12:06.320 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=274432 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:06.320 * Looking for test storage... 
00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88647094272 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9119920128 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.320 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:06.320 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:06.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.320 --rc genhtml_branch_coverage=1 00:12:06.320 --rc genhtml_function_coverage=1 00:12:06.320 --rc genhtml_legend=1 00:12:06.320 --rc geninfo_all_blocks=1 00:12:06.320 --rc geninfo_unexecuted_blocks=1 00:12:06.320 00:12:06.320 ' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:06.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.320 --rc genhtml_branch_coverage=1 00:12:06.320 --rc genhtml_function_coverage=1 00:12:06.320 --rc genhtml_legend=1 00:12:06.320 --rc geninfo_all_blocks=1 00:12:06.320 --rc geninfo_unexecuted_blocks=1 00:12:06.320 00:12:06.320 ' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:06.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.320 --rc genhtml_branch_coverage=1 00:12:06.320 --rc genhtml_function_coverage=1 00:12:06.320 --rc genhtml_legend=1 00:12:06.320 --rc geninfo_all_blocks=1 00:12:06.320 --rc geninfo_unexecuted_blocks=1 00:12:06.320 00:12:06.320 ' 00:12:06.320 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:06.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:06.320 --rc genhtml_branch_coverage=1 00:12:06.320 --rc genhtml_function_coverage=1 00:12:06.320 --rc genhtml_legend=1 00:12:06.321 --rc geninfo_all_blocks=1 00:12:06.321 --rc geninfo_unexecuted_blocks=1 00:12:06.321 00:12:06.321 ' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.321 23:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:06.321 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.889 23:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.889 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:12.890 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:12.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.890 23:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:12.890 Found net devices under 0000:af:00.0: cvl_0_0 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:12.890 Found net devices under 0000:af:00.1: cvl_0_1 00:12:12.890 23:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:12.890 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:12.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:12.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:12:12.890 00:12:12.890 --- 10.0.0.2 ping statistics --- 00:12:12.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.890 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:12:12.890 00:12:12.890 --- 10.0.0.1 ping statistics --- 00:12:12.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.890 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:12.890 23:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.890 ************************************ 00:12:12.890 START TEST nvmf_filesystem_no_in_capsule 00:12:12.890 ************************************ 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3905903 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3905903 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3905903 ']' 00:12:12.890 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.891 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.891 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.891 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.891 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.891 [2024-12-13 23:52:51.338135] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:12.891 [2024-12-13 23:52:51.338226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.891 [2024-12-13 23:52:51.455363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.891 [2024-12-13 23:52:51.565112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.891 [2024-12-13 23:52:51.565162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:12.891 [2024-12-13 23:52:51.565172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.891 [2024-12-13 23:52:51.565182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.891 [2024-12-13 23:52:51.565190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.891 [2024-12-13 23:52:51.567921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.891 [2024-12-13 23:52:51.567995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.891 [2024-12-13 23:52:51.568057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.891 [2024-12-13 23:52:51.568066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.150 [2024-12-13 23:52:52.190148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.150 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.718 Malloc1 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.718 [2024-12-13 23:52:52.788074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:13.718 23:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:12:13.718 {
00:12:13.718 "name": "Malloc1",
00:12:13.718 "aliases": [
00:12:13.718 "472d43e7-c9c6-4bae-8602-4b3b48444cca"
00:12:13.718 ],
00:12:13.718 "product_name": "Malloc disk",
00:12:13.718 "block_size": 512,
00:12:13.718 "num_blocks": 1048576,
00:12:13.718 "uuid": "472d43e7-c9c6-4bae-8602-4b3b48444cca",
00:12:13.718 "assigned_rate_limits": {
00:12:13.718 "rw_ios_per_sec": 0,
00:12:13.718 "rw_mbytes_per_sec": 0,
00:12:13.718 "r_mbytes_per_sec": 0,
00:12:13.718 "w_mbytes_per_sec": 0
00:12:13.718 },
00:12:13.718 "claimed": true,
00:12:13.718 "claim_type": "exclusive_write",
00:12:13.718 "zoned": false,
00:12:13.718 "supported_io_types": {
00:12:13.718 "read": true,
00:12:13.718 "write": true,
00:12:13.718 "unmap": true,
00:12:13.718 "flush": true,
00:12:13.718 "reset": true,
00:12:13.718 "nvme_admin": false,
00:12:13.718 "nvme_io": false,
00:12:13.718 "nvme_io_md": false,
00:12:13.718 "write_zeroes": true,
00:12:13.718 "zcopy": true,
00:12:13.718 "get_zone_info": false,
00:12:13.718 "zone_management": false,
00:12:13.718 "zone_append": false,
00:12:13.718 "compare": false,
00:12:13.718 "compare_and_write": false,
00:12:13.718 "abort": true,
00:12:13.718 "seek_hole": false,
00:12:13.718 "seek_data": false,
00:12:13.718 "copy": true,
00:12:13.718 "nvme_iov_md": false
00:12:13.718 },
00:12:13.718 "memory_domains": [
00:12:13.718 {
00:12:13.718 "dma_device_id": "system",
00:12:13.718 "dma_device_type": 1
00:12:13.718 },
00:12:13.718 {
00:12:13.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:13.718 "dma_device_type": 2
00:12:13.718 }
00:12:13.718 ],
00:12:13.718 "driver_specific": {}
00:12:13.718 }
00:12:13.718 ]'
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:12:13.718 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:12:13.977 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:14.914 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 --
# waitforserial SPDKISFASTANDAWESOME 00:12:14.914 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:14.914 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.914 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:14.914 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.446 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.446 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.446 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:17.446 23:52:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:17.446 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:17.447 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:17.447 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:17.447 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:17.705 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:19.082 23:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:19.082 ************************************ 00:12:19.082 START TEST filesystem_ext4 00:12:19.082 ************************************ 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:19.082 23:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:12:19.082 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:12:19.082 mke2fs 1.47.0 (5-Feb-2023)
00:12:19.082 Discarding device blocks: 0/522240 done
00:12:19.082 Creating filesystem with 522240 1k blocks and 130560 inodes
00:12:19.082 Filesystem UUID: 64830bc2-7141-4843-a109-c87f4150ad9b
00:12:19.082 Superblock backups stored on blocks:
00:12:19.082 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:12:19.082
00:12:19.082 Allocating group tables: 0/64 done
00:12:19.082 Writing inode tables: 0/64 done
00:12:19.082 Creating journal (8192 blocks): done
00:12:19.082 Writing superblocks and filesystem accounting information: 0/64 done
00:12:19.082
00:12:19.082 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:12:19.082 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:25.647 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:25.647 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:25.648 23:53:03
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3905903 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.648 00:12:25.648 real 0m5.842s 00:12:25.648 user 0m0.025s 00:12:25.648 sys 0m0.071s 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:25.648 ************************************ 00:12:25.648 END TEST filesystem_ext4 00:12:25.648 ************************************ 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:25.648 
23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.648 ************************************ 00:12:25.648 START TEST filesystem_btrfs 00:12:25.648 ************************************ 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:25.648 23:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:12:25.648 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:12:25.648 btrfs-progs v6.8.1
00:12:25.648 See https://btrfs.readthedocs.io for more information.
00:12:25.648
00:12:25.648 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:12:25.648 NOTE: several default settings have changed in version 5.15, please make sure
00:12:25.648 this does not affect your deployments:
00:12:25.648 - DUP for metadata (-m dup)
00:12:25.648 - enabled no-holes (-O no-holes)
00:12:25.648 - enabled free-space-tree (-R free-space-tree)
00:12:25.648
00:12:25.648 Label: (null)
00:12:25.648 UUID: 559d08ba-e473-49d0-91f9-20237d562151
00:12:25.648 Node size: 16384
00:12:25.648 Sector size: 4096 (CPU page size: 4096)
00:12:25.648 Filesystem size: 510.00MiB
00:12:25.648 Block group profiles:
00:12:25.648 Data: single 8.00MiB
00:12:25.648 Metadata: DUP 32.00MiB
00:12:25.648 System: DUP 8.00MiB
00:12:25.648 SSD detected: yes
00:12:25.648 Zoned device: no
00:12:25.648 Features: extref, skinny-metadata, no-holes, free-space-tree
00:12:25.648 Checksum: crc32c
00:12:25.648 Number of devices: 1
00:12:25.648 Devices:
00:12:25.648 ID SIZE PATH
00:12:25.648 1 510.00MiB /dev/nvme0n1p1
00:12:25.648
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:25.648 23:53:04
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3905903 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.648 00:12:25.648 real 0m0.789s 00:12:25.648 user 0m0.031s 00:12:25.648 sys 0m0.106s 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.648 
23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.648 ************************************ 00:12:25.648 END TEST filesystem_btrfs 00:12:25.648 ************************************ 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.648 ************************************ 00:12:25.648 START TEST filesystem_xfs 00:12:25.648 ************************************ 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:12:25.648 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:12:25.648 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:12:25.648 = sectsz=512 attr=2, projid32bit=1
00:12:25.648 = crc=1 finobt=1, sparse=1, rmapbt=0
00:12:25.648 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:12:25.648 data = bsize=4096 blocks=130560, imaxpct=25
00:12:25.648 = sunit=0 swidth=0 blks
00:12:25.648 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:12:25.648 log =internal log bsize=4096 blocks=16384, version=2
00:12:25.648 = sectsz=512 sunit=0 blks, lazy-count=1
00:12:25.648 realtime =none extsz=4096 blocks=0, rtextents=0
00:12:26.584 Discarding blocks...Done.
00:12:26.584 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:26.584 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3905903 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:28.488 23:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:28.488 00:12:28.488 real 0m2.900s 00:12:28.488 user 0m0.025s 00:12:28.488 sys 0m0.073s 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:28.488 ************************************ 00:12:28.488 END TEST filesystem_xfs 00:12:28.488 ************************************ 00:12:28.488 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:28.747 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:28.747 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.006 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3905903 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3905903 ']' 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3905903 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3905903 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3905903' 00:12:29.006 killing process with pid 3905903 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3905903 00:12:29.006 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3905903 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:32.295 00:12:32.295 real 0m19.472s 00:12:32.295 user 1m15.135s 00:12:32.295 sys 0m1.547s 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.295 ************************************ 00:12:32.295 END TEST nvmf_filesystem_no_in_capsule 00:12:32.295 ************************************ 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.295 23:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.295 ************************************ 00:12:32.295 START TEST nvmf_filesystem_in_capsule 00:12:32.295 ************************************ 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3909308 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3909308 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3909308 ']' 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.295 23:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.295 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.295 [2024-12-13 23:53:10.880717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:32.295 [2024-12-13 23:53:10.880824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.295 [2024-12-13 23:53:10.998628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.295 [2024-12-13 23:53:11.100237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.295 [2024-12-13 23:53:11.100284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.295 [2024-12-13 23:53:11.100294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.295 [2024-12-13 23:53:11.100304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.295 [2024-12-13 23:53:11.100312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.295 [2024-12-13 23:53:11.102515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.295 [2024-12-13 23:53:11.102593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.295 [2024-12-13 23:53:11.102655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.295 [2024-12-13 23:53:11.102666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.554 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.554 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:32.554 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.554 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.554 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.813 [2024-12-13 23:53:11.730022] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.813 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 Malloc1 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 23:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 [2024-12-13 23:53:12.370233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.381 23:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:33.381 { 00:12:33.381 "name": "Malloc1", 00:12:33.381 "aliases": [ 00:12:33.381 "798902fd-a541-4dc4-8a26-bb490f732d61" 00:12:33.381 ], 00:12:33.381 "product_name": "Malloc disk", 00:12:33.381 "block_size": 512, 00:12:33.381 "num_blocks": 1048576, 00:12:33.381 "uuid": "798902fd-a541-4dc4-8a26-bb490f732d61", 00:12:33.381 "assigned_rate_limits": { 00:12:33.381 "rw_ios_per_sec": 0, 00:12:33.381 "rw_mbytes_per_sec": 0, 00:12:33.381 "r_mbytes_per_sec": 0, 00:12:33.381 "w_mbytes_per_sec": 0 00:12:33.381 }, 00:12:33.381 "claimed": true, 00:12:33.381 "claim_type": "exclusive_write", 00:12:33.381 "zoned": false, 00:12:33.381 "supported_io_types": { 00:12:33.381 "read": true, 00:12:33.381 "write": true, 00:12:33.381 "unmap": true, 00:12:33.381 "flush": true, 00:12:33.381 "reset": true, 00:12:33.381 "nvme_admin": false, 00:12:33.381 "nvme_io": false, 00:12:33.381 "nvme_io_md": false, 00:12:33.381 "write_zeroes": true, 00:12:33.381 "zcopy": true, 00:12:33.381 "get_zone_info": false, 00:12:33.381 "zone_management": false, 00:12:33.381 "zone_append": false, 00:12:33.381 "compare": false, 00:12:33.381 "compare_and_write": false, 00:12:33.381 "abort": true, 00:12:33.381 "seek_hole": false, 00:12:33.381 "seek_data": false, 00:12:33.381 "copy": true, 00:12:33.381 "nvme_iov_md": false 00:12:33.381 }, 00:12:33.381 "memory_domains": [ 00:12:33.381 { 00:12:33.381 "dma_device_id": "system", 00:12:33.381 "dma_device_type": 1 00:12:33.381 }, 00:12:33.381 { 00:12:33.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.381 "dma_device_type": 2 00:12:33.381 } 00:12:33.381 ], 00:12:33.381 
"driver_specific": {} 00:12:33.381 } 00:12:33.381 ]' 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:33.381 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.759 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.759 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.759 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.759 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:34.759 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:36.729 23:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:36.729 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:37.010 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:37.578 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:38.515 ************************************ 00:12:38.515 START TEST filesystem_in_capsule_ext4 00:12:38.515 ************************************ 00:12:38.515 23:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:38.515 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:38.515 mke2fs 1.47.0 (5-Feb-2023) 00:12:38.515 Discarding device blocks: 
0/522240 done 00:12:38.515 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:38.515 Filesystem UUID: ee7170eb-d719-44aa-ad9e-39b8cd7c90c5 00:12:38.515 Superblock backups stored on blocks: 00:12:38.515 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:38.515 00:12:38.515 Allocating group tables: 0/64 done 00:12:38.515 Writing inode tables: 0/64 done 00:12:38.774 Creating journal (8192 blocks): done 00:12:39.711 Writing superblocks and filesystem accounting information: 0/64 done 00:12:39.711 00:12:39.711 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:39.711 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3909308 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:46.276 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:46.276 00:12:46.276 real 0m7.303s 00:12:46.276 user 0m0.030s 00:12:46.276 sys 0m0.068s 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 ************************************ 00:12:46.277 END TEST filesystem_in_capsule_ext4 00:12:46.277 ************************************ 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:46.277 ************************************ 00:12:46.277 START 
TEST filesystem_in_capsule_btrfs 00:12:46.277 ************************************ 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:46.277 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:46.277 btrfs-progs v6.8.1 00:12:46.277 See https://btrfs.readthedocs.io for more information. 00:12:46.277 00:12:46.277 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:46.277 NOTE: several default settings have changed in version 5.15, please make sure 00:12:46.277 this does not affect your deployments: 00:12:46.277 - DUP for metadata (-m dup) 00:12:46.277 - enabled no-holes (-O no-holes) 00:12:46.277 - enabled free-space-tree (-R free-space-tree) 00:12:46.277 00:12:46.277 Label: (null) 00:12:46.277 UUID: 620bfe0f-eb9a-460c-a0d2-adbb1ee9ca0d 00:12:46.277 Node size: 16384 00:12:46.277 Sector size: 4096 (CPU page size: 4096) 00:12:46.277 Filesystem size: 510.00MiB 00:12:46.277 Block group profiles: 00:12:46.277 Data: single 8.00MiB 00:12:46.277 Metadata: DUP 32.00MiB 00:12:46.277 System: DUP 8.00MiB 00:12:46.277 SSD detected: yes 00:12:46.277 Zoned device: no 00:12:46.277 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:46.277 Checksum: crc32c 00:12:46.277 Number of devices: 1 00:12:46.277 Devices: 00:12:46.277 ID SIZE PATH 00:12:46.277 1 510.00MiB /dev/nvme0n1p1 00:12:46.277 00:12:46.277 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:46.277 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3909308 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:46.845 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:47.104 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:47.104 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:47.104 00:12:47.104 real 0m1.078s 00:12:47.104 user 0m0.024s 00:12:47.104 sys 0m0.118s 00:12:47.104 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.104 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:47.104 ************************************ 00:12:47.104 END TEST filesystem_in_capsule_btrfs 00:12:47.104 ************************************ 00:12:47.104 23:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:47.104 ************************************ 00:12:47.104 START TEST filesystem_in_capsule_xfs 00:12:47.104 ************************************ 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:47.104 
23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:47.104 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:47.104 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:47.104 = sectsz=512 attr=2, projid32bit=1 00:12:47.104 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:47.104 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:47.104 data = bsize=4096 blocks=130560, imaxpct=25 00:12:47.104 = sunit=0 swidth=0 blks 00:12:47.104 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:47.104 log =internal log bsize=4096 blocks=16384, version=2 00:12:47.104 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:47.104 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:48.041 Discarding blocks...Done. 
00:12:48.041 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:48.041 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3909308 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:50.576 00:12:50.576 real 0m3.336s 00:12:50.576 user 0m0.028s 00:12:50.576 sys 0m0.071s 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:50.576 ************************************ 00:12:50.576 END TEST filesystem_in_capsule_xfs 00:12:50.576 ************************************ 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:50.576 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.143 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.143 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:51.143 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:51.143 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.143 23:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:51.143 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3909308 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3909308 ']' 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3909308 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.143 23:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3909308 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3909308' 00:12:51.143 killing process with pid 3909308 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3909308 00:12:51.143 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3909308 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:53.678 00:12:53.678 real 0m21.914s 00:12:53.678 user 1m24.928s 00:12:53.678 sys 0m1.608s 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.678 ************************************ 00:12:53.678 END TEST nvmf_filesystem_in_capsule 00:12:53.678 ************************************ 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.678 rmmod nvme_tcp 00:12:53.678 rmmod nvme_fabrics 00:12:53.678 rmmod nvme_keyring 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.678 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.937 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.937 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.937 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.937 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.937 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.841 00:12:55.841 real 0m49.916s 00:12:55.841 user 2m42.029s 00:12:55.841 sys 0m7.651s 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:55.841 ************************************ 00:12:55.841 END TEST nvmf_filesystem 00:12:55.841 ************************************ 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.841 ************************************ 00:12:55.841 START TEST nvmf_target_discovery 00:12:55.841 ************************************ 00:12:55.841 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:56.100 * Looking for test storage... 
00:12:56.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:56.100 
23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:56.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.100 --rc genhtml_branch_coverage=1 00:12:56.100 --rc genhtml_function_coverage=1 00:12:56.100 --rc genhtml_legend=1 00:12:56.100 --rc geninfo_all_blocks=1 00:12:56.100 --rc geninfo_unexecuted_blocks=1 00:12:56.100 00:12:56.100 ' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:56.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.100 --rc genhtml_branch_coverage=1 00:12:56.100 --rc genhtml_function_coverage=1 00:12:56.100 --rc genhtml_legend=1 00:12:56.100 --rc geninfo_all_blocks=1 00:12:56.100 --rc geninfo_unexecuted_blocks=1 00:12:56.100 00:12:56.100 ' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:56.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.100 --rc genhtml_branch_coverage=1 00:12:56.100 --rc genhtml_function_coverage=1 00:12:56.100 --rc genhtml_legend=1 00:12:56.100 --rc geninfo_all_blocks=1 00:12:56.100 --rc geninfo_unexecuted_blocks=1 00:12:56.100 00:12:56.100 ' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:56.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.100 --rc genhtml_branch_coverage=1 00:12:56.100 --rc genhtml_function_coverage=1 00:12:56.100 --rc genhtml_legend=1 00:12:56.100 --rc geninfo_all_blocks=1 00:12:56.100 --rc geninfo_unexecuted_blocks=1 00:12:56.100 00:12:56.100 ' 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.100 23:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.100 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:56.101 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:01.374 23:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.374 23:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:01.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:01.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.374 23:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.374 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:01.375 Found net devices under 0000:af:00.0: cvl_0_0 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:01.375 23:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:01.375 Found net devices under 0000:af:00.1: cvl_0_1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.375 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:13:01.634 00:13:01.634 --- 10.0.0.2 ping statistics --- 00:13:01.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.634 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:13:01.634 00:13:01.634 --- 10.0.0.1 ping statistics --- 00:13:01.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.634 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3916370 00:13:01.634 23:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3916370 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3916370 ']' 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.634 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:01.634 [2024-12-13 23:53:40.720136] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:01.634 [2024-12-13 23:53:40.720225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.893 [2024-12-13 23:53:40.838937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.893 [2024-12-13 23:53:40.951970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:01.894 [2024-12-13 23:53:40.952011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.894 [2024-12-13 23:53:40.952021] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.894 [2024-12-13 23:53:40.952031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.894 [2024-12-13 23:53:40.952039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.894 [2024-12-13 23:53:40.954220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.894 [2024-12-13 23:53:40.954296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.894 [2024-12-13 23:53:40.954361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.894 [2024-12-13 23:53:40.954370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.461 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.461 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.462 [2024-12-13 23:53:41.568374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.462 Null1 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.462 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 
23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 [2024-12-13 23:53:41.624981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 Null2 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 
23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.721 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 Null3 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 Null4 00:13:02.722 
23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.722 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:02.981 00:13:02.981 Discovery Log Number of Records 6, Generation counter 6 00:13:02.981 =====Discovery Log Entry 0====== 00:13:02.981 trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: current discovery subsystem 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4420 00:13:02.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: explicit discovery connections, duplicate discovery information 00:13:02.981 sectype: none 00:13:02.981 =====Discovery Log Entry 1====== 00:13:02.981 trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: nvme subsystem 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4420 00:13:02.981 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: none 00:13:02.981 sectype: none 00:13:02.981 =====Discovery Log Entry 2====== 00:13:02.981 
trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: nvme subsystem 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4420 00:13:02.981 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: none 00:13:02.981 sectype: none 00:13:02.981 =====Discovery Log Entry 3====== 00:13:02.981 trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: nvme subsystem 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4420 00:13:02.981 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: none 00:13:02.981 sectype: none 00:13:02.981 =====Discovery Log Entry 4====== 00:13:02.981 trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: nvme subsystem 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4420 00:13:02.981 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: none 00:13:02.981 sectype: none 00:13:02.981 =====Discovery Log Entry 5====== 00:13:02.981 trtype: tcp 00:13:02.981 adrfam: ipv4 00:13:02.981 subtype: discovery subsystem referral 00:13:02.981 treq: not required 00:13:02.981 portid: 0 00:13:02.981 trsvcid: 4430 00:13:02.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:02.981 traddr: 10.0.0.2 00:13:02.981 eflags: none 00:13:02.981 sectype: none 00:13:02.981 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:02.981 Perform nvmf subsystem discovery via RPC 00:13:02.981 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:02.981 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.981 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.981 [ 00:13:02.981 { 00:13:02.981 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:02.981 "subtype": "Discovery", 00:13:02.981 "listen_addresses": [ 00:13:02.981 { 00:13:02.981 "trtype": "TCP", 00:13:02.981 "adrfam": "IPv4", 00:13:02.981 "traddr": "10.0.0.2", 00:13:02.981 "trsvcid": "4420" 00:13:02.981 } 00:13:02.981 ], 00:13:02.982 "allow_any_host": true, 00:13:02.982 "hosts": [] 00:13:02.982 }, 00:13:02.982 { 00:13:02.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.982 "subtype": "NVMe", 00:13:02.982 "listen_addresses": [ 00:13:02.982 { 00:13:02.982 "trtype": "TCP", 00:13:02.982 "adrfam": "IPv4", 00:13:02.982 "traddr": "10.0.0.2", 00:13:02.982 "trsvcid": "4420" 00:13:02.982 } 00:13:02.982 ], 00:13:02.982 "allow_any_host": true, 00:13:02.982 "hosts": [], 00:13:02.982 "serial_number": "SPDK00000000000001", 00:13:02.982 "model_number": "SPDK bdev Controller", 00:13:02.982 "max_namespaces": 32, 00:13:02.982 "min_cntlid": 1, 00:13:02.982 "max_cntlid": 65519, 00:13:02.982 "namespaces": [ 00:13:02.982 { 00:13:02.982 "nsid": 1, 00:13:02.982 "bdev_name": "Null1", 00:13:02.982 "name": "Null1", 00:13:02.982 "nguid": "FE4DB3AEEF194952A9A3E4848DE1224A", 00:13:02.982 "uuid": "fe4db3ae-ef19-4952-a9a3-e4848de1224a" 00:13:02.982 } 00:13:02.982 ] 00:13:02.982 }, 00:13:02.982 { 00:13:02.982 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:02.982 "subtype": "NVMe", 00:13:02.982 "listen_addresses": [ 00:13:02.982 { 00:13:02.982 "trtype": "TCP", 00:13:02.982 "adrfam": "IPv4", 00:13:02.982 "traddr": "10.0.0.2", 00:13:02.982 "trsvcid": "4420" 00:13:02.982 } 00:13:02.982 ], 00:13:02.982 "allow_any_host": true, 00:13:02.982 "hosts": [], 00:13:02.982 "serial_number": "SPDK00000000000002", 00:13:02.982 "model_number": "SPDK bdev Controller", 00:13:02.982 "max_namespaces": 32, 00:13:02.982 "min_cntlid": 1, 00:13:02.982 "max_cntlid": 65519, 00:13:02.982 "namespaces": [ 00:13:02.982 { 00:13:02.982 "nsid": 1, 00:13:02.982 "bdev_name": "Null2", 00:13:02.982 "name": "Null2", 00:13:02.982 "nguid": "B755BBF06D0141B88260D2AF6FEB0157", 
00:13:02.982 "uuid": "b755bbf0-6d01-41b8-8260-d2af6feb0157" 00:13:02.982 } 00:13:02.982 ] 00:13:02.982 }, 00:13:02.982 { 00:13:02.982 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:02.982 "subtype": "NVMe", 00:13:02.982 "listen_addresses": [ 00:13:02.982 { 00:13:02.982 "trtype": "TCP", 00:13:02.982 "adrfam": "IPv4", 00:13:02.982 "traddr": "10.0.0.2", 00:13:02.982 "trsvcid": "4420" 00:13:02.982 } 00:13:02.982 ], 00:13:02.982 "allow_any_host": true, 00:13:02.982 "hosts": [], 00:13:02.982 "serial_number": "SPDK00000000000003", 00:13:02.982 "model_number": "SPDK bdev Controller", 00:13:02.982 "max_namespaces": 32, 00:13:02.982 "min_cntlid": 1, 00:13:02.982 "max_cntlid": 65519, 00:13:02.982 "namespaces": [ 00:13:02.982 { 00:13:02.982 "nsid": 1, 00:13:02.982 "bdev_name": "Null3", 00:13:02.982 "name": "Null3", 00:13:02.982 "nguid": "AD2AA407267A46269ECFCE8585CC2C09", 00:13:02.982 "uuid": "ad2aa407-267a-4626-9ecf-ce8585cc2c09" 00:13:02.982 } 00:13:02.982 ] 00:13:02.982 }, 00:13:02.982 { 00:13:02.982 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:02.982 "subtype": "NVMe", 00:13:02.982 "listen_addresses": [ 00:13:02.982 { 00:13:02.982 "trtype": "TCP", 00:13:02.982 "adrfam": "IPv4", 00:13:02.982 "traddr": "10.0.0.2", 00:13:02.982 "trsvcid": "4420" 00:13:02.982 } 00:13:02.982 ], 00:13:02.982 "allow_any_host": true, 00:13:02.982 "hosts": [], 00:13:02.982 "serial_number": "SPDK00000000000004", 00:13:02.982 "model_number": "SPDK bdev Controller", 00:13:02.982 "max_namespaces": 32, 00:13:02.982 "min_cntlid": 1, 00:13:02.982 "max_cntlid": 65519, 00:13:02.982 "namespaces": [ 00:13:02.982 { 00:13:02.982 "nsid": 1, 00:13:02.982 "bdev_name": "Null4", 00:13:02.982 "name": "Null4", 00:13:02.982 "nguid": "7A6BE653E9664888ACA92B2E73951788", 00:13:02.982 "uuid": "7a6be653-e966-4888-aca9-2b2e73951788" 00:13:02.982 } 00:13:02.982 ] 00:13:02.982 } 00:13:02.982 ] 00:13:02.982 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 
23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:02.982 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:02.982 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.982 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:02.982 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:02.982 rmmod nvme_tcp 00:13:03.242 rmmod nvme_fabrics 00:13:03.242 rmmod nvme_keyring 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3916370 ']' 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3916370 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3916370 ']' 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3916370 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3916370 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3916370' 00:13:03.242 killing process with pid 3916370 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3916370 00:13:03.242 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3916370 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.618 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.523 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:06.523 00:13:06.523 real 0m10.465s 00:13:06.523 user 0m10.558s 00:13:06.523 sys 0m4.516s 00:13:06.523 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.523 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:06.523 ************************************ 00:13:06.524 END TEST nvmf_target_discovery 00:13:06.524 ************************************ 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.524 ************************************ 00:13:06.524 START TEST nvmf_referrals 00:13:06.524 ************************************ 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:06.524 * Looking for test storage... 
00:13:06.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:06.524 23:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.524 
--rc genhtml_branch_coverage=1 00:13:06.524 --rc genhtml_function_coverage=1 00:13:06.524 --rc genhtml_legend=1 00:13:06.524 --rc geninfo_all_blocks=1 00:13:06.524 --rc geninfo_unexecuted_blocks=1 00:13:06.524 00:13:06.524 ' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.524 --rc genhtml_branch_coverage=1 00:13:06.524 --rc genhtml_function_coverage=1 00:13:06.524 --rc genhtml_legend=1 00:13:06.524 --rc geninfo_all_blocks=1 00:13:06.524 --rc geninfo_unexecuted_blocks=1 00:13:06.524 00:13:06.524 ' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.524 --rc genhtml_branch_coverage=1 00:13:06.524 --rc genhtml_function_coverage=1 00:13:06.524 --rc genhtml_legend=1 00:13:06.524 --rc geninfo_all_blocks=1 00:13:06.524 --rc geninfo_unexecuted_blocks=1 00:13:06.524 00:13:06.524 ' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.524 --rc genhtml_branch_coverage=1 00:13:06.524 --rc genhtml_function_coverage=1 00:13:06.524 --rc genhtml_legend=1 00:13:06.524 --rc geninfo_all_blocks=1 00:13:06.524 --rc geninfo_unexecuted_blocks=1 00:13:06.524 00:13:06.524 ' 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.524 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:06.783 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.784 
23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.784 23:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:06.784 23:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:06.784 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.058 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:12.059 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:12.059 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:12.059 Found net devices under 0000:af:00.0: cvl_0_0 00:13:12.059 23:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:12.059 Found net devices under 0000:af:00.1: cvl_0_1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.059 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:13:12.059 00:13:12.059 --- 10.0.0.2 ping statistics --- 00:13:12.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.059 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:13:12.059 00:13:12.059 --- 10.0.0.1 ping statistics --- 00:13:12.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.059 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.059 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.318 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.318 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.318 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.318 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.318 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.319 23:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3920301 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3920301 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3920301 ']' 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.319 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:12.319 [2024-12-13 23:53:51.307073] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:12.319 [2024-12-13 23:53:51.307163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.319 [2024-12-13 23:53:51.424318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.577 [2024-12-13 23:53:51.533150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.577 [2024-12-13 23:53:51.533194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:12.577 [2024-12-13 23:53:51.533204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.577 [2024-12-13 23:53:51.533216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.577 [2024-12-13 23:53:51.533224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.577 [2024-12-13 23:53:51.535483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.577 [2024-12-13 23:53:51.535535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.577 [2024-12-13 23:53:51.535612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.577 [2024-12-13 23:53:51.535623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.144 [2024-12-13 23:53:52.183224] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:13.144 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 [2024-12-13 23:53:52.218293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:13.145 23:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.145 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.404 23:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.404 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:13.663 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:13.922 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:13.922 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:13.922 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.181 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.440 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.698 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:14.956 23:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:14.956 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.957 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:14.957 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.215 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.216 rmmod nvme_tcp 00:13:15.216 rmmod nvme_fabrics 00:13:15.216 rmmod nvme_keyring 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3920301 ']' 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3920301 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3920301 ']' 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3920301 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:15.216 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3920301 00:13:15.473 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:15.473 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:15.473 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3920301' 00:13:15.473 killing process with pid 3920301 00:13:15.473 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3920301 00:13:15.473 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3920301 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.406 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.943 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.943 00:13:18.943 real 0m12.103s 00:13:18.943 user 0m17.540s 00:13:18.943 sys 0m4.905s 00:13:18.943 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.943 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:18.944 
************************************ 00:13:18.944 END TEST nvmf_referrals 00:13:18.944 ************************************ 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.944 ************************************ 00:13:18.944 START TEST nvmf_connect_disconnect 00:13:18.944 ************************************ 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:18.944 * Looking for test storage... 
00:13:18.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.944 --rc genhtml_branch_coverage=1 00:13:18.944 --rc genhtml_function_coverage=1 00:13:18.944 --rc genhtml_legend=1 00:13:18.944 --rc geninfo_all_blocks=1 00:13:18.944 --rc geninfo_unexecuted_blocks=1 00:13:18.944 00:13:18.944 ' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.944 --rc genhtml_branch_coverage=1 00:13:18.944 --rc genhtml_function_coverage=1 00:13:18.944 --rc genhtml_legend=1 00:13:18.944 --rc geninfo_all_blocks=1 00:13:18.944 --rc geninfo_unexecuted_blocks=1 00:13:18.944 00:13:18.944 ' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.944 --rc genhtml_branch_coverage=1 00:13:18.944 --rc genhtml_function_coverage=1 00:13:18.944 --rc genhtml_legend=1 00:13:18.944 --rc geninfo_all_blocks=1 00:13:18.944 --rc geninfo_unexecuted_blocks=1 00:13:18.944 00:13:18.944 ' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.944 --rc genhtml_branch_coverage=1 00:13:18.944 --rc genhtml_function_coverage=1 00:13:18.944 --rc genhtml_legend=1 00:13:18.944 --rc geninfo_all_blocks=1 00:13:18.944 --rc geninfo_unexecuted_blocks=1 00:13:18.944 00:13:18.944 ' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.944 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.945 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.220 23:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.220 23:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:24.220 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:24.220 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.220 23:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:24.220 Found net devices under 0000:af:00.0: cvl_0_0 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.220 23:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:24.220 Found net devices under 0000:af:00.1: cvl_0_1 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.220 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.221 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.221 23:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:13:24.221 00:13:24.221 --- 10.0.0.2 ping statistics --- 00:13:24.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.221 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:13:24.221 00:13:24.221 --- 10.0.0.1 ping statistics --- 00:13:24.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.221 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3924576 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3924576 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3924576 ']' 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.221 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:24.221 [2024-12-13 23:54:03.280394] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:24.221 [2024-12-13 23:54:03.280494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.480 [2024-12-13 23:54:03.398766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.480 [2024-12-13 23:54:03.510476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:24.480 [2024-12-13 23:54:03.510522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.480 [2024-12-13 23:54:03.510533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.480 [2024-12-13 23:54:03.510543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.480 [2024-12-13 23:54:03.510551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.480 [2024-12-13 23:54:03.512924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.480 [2024-12-13 23:54:03.513001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.480 [2024-12-13 23:54:03.513062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.480 [2024-12-13 23:54:03.513072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:25.048 23:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.048 [2024-12-13 23:54:04.124658] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.048 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.307 23:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:25.307 [2024-12-13 23:54:04.242302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:25.307 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:27.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.847 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.795 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.672 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.818 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:58.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:05.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.807 rmmod nvme_tcp 00:17:19.807 rmmod nvme_fabrics 00:17:19.807 rmmod nvme_keyring 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3924576 ']' 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3924576 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3924576 ']' 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3924576 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3924576 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3924576' 00:17:19.807 killing process with pid 3924576 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3924576 00:17:19.807 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3924576 00:17:21.185 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.185 23:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.185 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.185 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.186 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.091 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:23.091 00:17:23.091 real 4m4.554s 00:17:23.091 user 15m33.811s 00:17:23.091 sys 0m25.944s 00:17:23.091 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.091 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:23.091 ************************************ 00:17:23.091 END TEST nvmf_connect_disconnect 00:17:23.091 ************************************ 00:17:23.351 23:58:02 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.351 ************************************ 00:17:23.351 START TEST nvmf_multitarget 00:17:23.351 ************************************ 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:23.351 * Looking for test storage... 00:17:23.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.351 --rc genhtml_branch_coverage=1 00:17:23.351 --rc genhtml_function_coverage=1 00:17:23.351 --rc genhtml_legend=1 00:17:23.351 --rc geninfo_all_blocks=1 00:17:23.351 --rc 
geninfo_unexecuted_blocks=1 00:17:23.351 00:17:23.351 ' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.351 --rc genhtml_branch_coverage=1 00:17:23.351 --rc genhtml_function_coverage=1 00:17:23.351 --rc genhtml_legend=1 00:17:23.351 --rc geninfo_all_blocks=1 00:17:23.351 --rc geninfo_unexecuted_blocks=1 00:17:23.351 00:17:23.351 ' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.351 --rc genhtml_branch_coverage=1 00:17:23.351 --rc genhtml_function_coverage=1 00:17:23.351 --rc genhtml_legend=1 00:17:23.351 --rc geninfo_all_blocks=1 00:17:23.351 --rc geninfo_unexecuted_blocks=1 00:17:23.351 00:17:23.351 ' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.351 --rc genhtml_branch_coverage=1 00:17:23.351 --rc genhtml_function_coverage=1 00:17:23.351 --rc genhtml_legend=1 00:17:23.351 --rc geninfo_all_blocks=1 00:17:23.351 --rc geninfo_unexecuted_blocks=1 00:17:23.351 00:17:23.351 ' 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.351 23:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.351 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.352 23:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.352 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:23.611 23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:28.886 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:28.886 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.886 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:28.887 Found net devices under 0000:af:00.0: cvl_0_0 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:28.887 Found net devices under 0000:af:00.1: cvl_0_1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:28.887 23:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:28.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:17:28.887 00:17:28.887 --- 10.0.0.2 ping statistics --- 00:17:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.887 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:17:28.887 00:17:28.887 --- 10.0.0.1 ping statistics --- 00:17:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.887 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3968322 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3968322 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3968322 ']' 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.887 23:58:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:28.887 [2024-12-13 23:58:07.926159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:28.887 [2024-12-13 23:58:07.926249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.146 [2024-12-13 23:58:08.042040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.146 [2024-12-13 23:58:08.144237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.146 [2024-12-13 23:58:08.144283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.146 [2024-12-13 23:58:08.144293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.146 [2024-12-13 23:58:08.144303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.146 [2024-12-13 23:58:08.144311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:29.146 [2024-12-13 23:58:08.146603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.146 [2024-12-13 23:58:08.146669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.146 [2024-12-13 23:58:08.146732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.146 [2024-12-13 23:58:08.146742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:29.714 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:29.973 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:29.973 23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:29.973 "nvmf_tgt_1" 00:17:29.973 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:29.973 "nvmf_tgt_2" 00:17:30.232 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:30.232 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:30.232 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:30.232 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:30.232 true 00:17:30.232 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:30.491 true 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.491 23:58:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.491 rmmod nvme_tcp 00:17:30.491 rmmod nvme_fabrics 00:17:30.491 rmmod nvme_keyring 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3968322 ']' 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3968322 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3968322 ']' 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3968322 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.491 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3968322 00:17:30.753 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.753 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:30.753 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3968322' 00:17:30.753 killing process with pid 3968322 00:17:30.753 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3968322 00:17:30.753 23:58:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3968322 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.827 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:34.363 
00:17:34.363 real 0m10.625s 00:17:34.363 user 0m12.677s 00:17:34.363 sys 0m4.541s 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:34.363 ************************************ 00:17:34.363 END TEST nvmf_multitarget 00:17:34.363 ************************************ 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.363 ************************************ 00:17:34.363 START TEST nvmf_rpc 00:17:34.363 ************************************ 00:17:34.363 23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:34.363 * Looking for test storage... 
00:17:34.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.363 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.363 --rc genhtml_branch_coverage=1 00:17:34.363 --rc genhtml_function_coverage=1 00:17:34.363 --rc genhtml_legend=1 00:17:34.363 --rc geninfo_all_blocks=1 00:17:34.363 --rc geninfo_unexecuted_blocks=1 
00:17:34.363 00:17:34.363 ' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.363 --rc genhtml_branch_coverage=1 00:17:34.363 --rc genhtml_function_coverage=1 00:17:34.363 --rc genhtml_legend=1 00:17:34.363 --rc geninfo_all_blocks=1 00:17:34.363 --rc geninfo_unexecuted_blocks=1 00:17:34.363 00:17:34.363 ' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.363 --rc genhtml_branch_coverage=1 00:17:34.363 --rc genhtml_function_coverage=1 00:17:34.363 --rc genhtml_legend=1 00:17:34.363 --rc geninfo_all_blocks=1 00:17:34.363 --rc geninfo_unexecuted_blocks=1 00:17:34.363 00:17:34.363 ' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.363 --rc genhtml_branch_coverage=1 00:17:34.363 --rc genhtml_function_coverage=1 00:17:34.363 --rc genhtml_legend=1 00:17:34.363 --rc geninfo_all_blocks=1 00:17:34.363 --rc geninfo_unexecuted_blocks=1 00:17:34.363 00:17:34.363 ' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.363 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.363 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:34.364 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.364 23:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.637 
23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:17:39.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.637 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:39.637 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.638 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.638 23:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:39.638 
23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:39.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:17:39.638 00:17:39.638 --- 10.0.0.2 ping statistics --- 00:17:39.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.638 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:17:39.638 00:17:39.638 --- 10.0.0.1 ping statistics --- 00:17:39.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.638 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3972098 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3972098 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3972098 
']' 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:39.638 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.638 [2024-12-13 23:58:18.571411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:39.638 [2024-12-13 23:58:18.571509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.638 [2024-12-13 23:58:18.689727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:39.897 [2024-12-13 23:58:18.799586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.897 [2024-12-13 23:58:18.799634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.897 [2024-12-13 23:58:18.799644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.897 [2024-12-13 23:58:18.799654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:39.897 [2024-12-13 23:58:18.799662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:39.897 [2024-12-13 23:58:18.802144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.897 [2024-12-13 23:58:18.802216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.897 [2024-12-13 23:58:18.802281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.897 [2024-12-13 23:58:18.802290] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:40.465 "tick_rate": 2100000000, 00:17:40.465 "poll_groups": [ 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_000", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 
"current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_001", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_002", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_003", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [] 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 }' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 [2024-12-13 23:58:19.541805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:40.465 "tick_rate": 2100000000, 00:17:40.465 "poll_groups": [ 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_000", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [ 00:17:40.465 { 00:17:40.465 "trtype": "TCP" 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_001", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [ 00:17:40.465 { 00:17:40.465 "trtype": "TCP" 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_002", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 
"current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [ 00:17:40.465 { 00:17:40.465 "trtype": "TCP" 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 }, 00:17:40.465 { 00:17:40.465 "name": "nvmf_tgt_poll_group_003", 00:17:40.465 "admin_qpairs": 0, 00:17:40.465 "io_qpairs": 0, 00:17:40.465 "current_admin_qpairs": 0, 00:17:40.465 "current_io_qpairs": 0, 00:17:40.465 "pending_bdev_io": 0, 00:17:40.465 "completed_nvme_io": 0, 00:17:40.465 "transports": [ 00:17:40.465 { 00:17:40.465 "trtype": "TCP" 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 } 00:17:40.465 ] 00:17:40.465 }' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:40.465 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.724 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:40.724 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 Malloc1 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 [2024-12-13 23:58:19.774212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.725 
23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:40.725 [2024-12-13 23:58:19.803554] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:40.725 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:40.725 could not add new controller: failed to write to nvme-fabrics device 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.725 23:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.725 23:58:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:42.100 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:42.100 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:42.100 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:42.100 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:42.100 23:58:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:44.006 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.265 23:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:44.265 [2024-12-13 23:58:23.357501] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:17:44.265 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:44.265 could not add new controller: failed to write to nvme-fabrics device 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.265 23:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.265 23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:45.643 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.643 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.643 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.643 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:45.643 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:47.546 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.805 [2024-12-13 23:58:26.917510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.805 23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.183 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:49.183 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:49.183 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:49.183 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:49.183 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:51.085 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 23:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 [2024-12-13 23:58:30.403576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.344 23:58:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.721 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.721 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.721 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.721 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.721 23:58:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.625 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 [2024-12-13 23:58:33.942624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.884 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.261 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.261 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:56.261 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:56.261 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:56.261 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.163 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 [2024-12-13 23:58:37.442098] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.422 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.800 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.800 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:59.800 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.800 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:59.800 23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:01.703 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 [2024-12-13 23:58:40.975129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.961 23:58:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.338 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.339 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.339 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.339 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:03.339 23:58:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:05.243 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.502 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 [2024-12-13 23:58:44.562717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 [2024-12-13 23:58:44.614836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.503 
23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.503 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 [2024-12-13 23:58:44.663016] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:05.762 
23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 [2024-12-13 23:58:44.711199] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.762 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 [2024-12-13 
23:58:44.763398] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 
23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:05.763 "tick_rate": 2100000000, 00:18:05.763 "poll_groups": [ 00:18:05.763 { 00:18:05.763 "name": "nvmf_tgt_poll_group_000", 00:18:05.763 "admin_qpairs": 2, 00:18:05.763 "io_qpairs": 168, 00:18:05.763 "current_admin_qpairs": 0, 00:18:05.763 "current_io_qpairs": 0, 00:18:05.763 "pending_bdev_io": 0, 00:18:05.763 "completed_nvme_io": 217, 00:18:05.763 "transports": [ 00:18:05.763 { 00:18:05.763 "trtype": "TCP" 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "nvmf_tgt_poll_group_001", 00:18:05.763 "admin_qpairs": 2, 00:18:05.763 "io_qpairs": 168, 00:18:05.763 "current_admin_qpairs": 0, 00:18:05.763 "current_io_qpairs": 0, 00:18:05.763 "pending_bdev_io": 0, 00:18:05.763 "completed_nvme_io": 220, 00:18:05.763 "transports": [ 00:18:05.763 { 00:18:05.763 "trtype": "TCP" 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "nvmf_tgt_poll_group_002", 00:18:05.763 "admin_qpairs": 1, 00:18:05.763 "io_qpairs": 168, 00:18:05.763 "current_admin_qpairs": 0, 00:18:05.763 "current_io_qpairs": 0, 00:18:05.763 "pending_bdev_io": 0, 00:18:05.763 "completed_nvme_io": 340, 00:18:05.763 "transports": [ 00:18:05.763 { 00:18:05.763 "trtype": "TCP" 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 }, 00:18:05.763 { 00:18:05.763 "name": "nvmf_tgt_poll_group_003", 00:18:05.763 "admin_qpairs": 2, 00:18:05.763 "io_qpairs": 168, 
00:18:05.763 "current_admin_qpairs": 0, 00:18:05.763 "current_io_qpairs": 0, 00:18:05.763 "pending_bdev_io": 0, 00:18:05.763 "completed_nvme_io": 245, 00:18:05.763 "transports": [ 00:18:05.763 { 00:18:05.763 "trtype": "TCP" 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 } 00:18:05.763 ] 00:18:05.763 }' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:05.763 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:06.022 rmmod nvme_tcp 00:18:06.022 rmmod nvme_fabrics 00:18:06.022 rmmod nvme_keyring 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3972098 ']' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3972098 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3972098 ']' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3972098 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.022 23:58:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3972098 00:18:06.022 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.022 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.022 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3972098' 00:18:06.022 killing process with pid 3972098 00:18:06.022 23:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3972098 00:18:06.022 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3972098 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.397 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:09.931 00:18:09.931 real 0m35.459s 00:18:09.931 user 1m50.454s 00:18:09.931 sys 0m6.204s 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.931 ************************************ 00:18:09.931 END TEST 
nvmf_rpc 00:18:09.931 ************************************ 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.931 ************************************ 00:18:09.931 START TEST nvmf_invalid 00:18:09.931 ************************************ 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:09.931 * Looking for test storage... 00:18:09.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 
00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:09.931 --rc genhtml_branch_coverage=1 00:18:09.931 --rc genhtml_function_coverage=1 00:18:09.931 --rc genhtml_legend=1 00:18:09.931 --rc geninfo_all_blocks=1 00:18:09.931 --rc geninfo_unexecuted_blocks=1 00:18:09.931 00:18:09.931 ' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.931 23:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.931 
23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.931 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.932 23:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:09.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:09.932 23:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:09.932 23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:15.198 23:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.198 23:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:15.198 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:15.198 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.198 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:15.199 Found net devices under 0000:af:00.0: cvl_0_0 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:15.199 Found net devices under 0000:af:00.1: cvl_0_1 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.199 23:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.199 23:58:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.199 23:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:15.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:18:15.199 00:18:15.199 --- 10.0.0.2 ping statistics --- 00:18:15.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.199 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:15.199 00:18:15.199 --- 10.0.0.1 ping statistics --- 00:18:15.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.199 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.199 23:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3980182 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3980182 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3980182 ']' 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.199 23:58:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:15.457 [2024-12-13 23:58:54.372075] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:15.457 [2024-12-13 23:58:54.372169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.457 [2024-12-13 23:58:54.491658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.715 [2024-12-13 23:58:54.603224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.715 [2024-12-13 23:58:54.603269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.715 [2024-12-13 23:58:54.603279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.715 [2024-12-13 23:58:54.603290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.715 [2024-12-13 23:58:54.603298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.715 [2024-12-13 23:58:54.605563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.715 [2024-12-13 23:58:54.605637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.715 [2024-12-13 23:58:54.605704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.715 [2024-12-13 23:58:54.605713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:16.282 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28317 00:18:16.282 [2024-12-13 23:58:55.400880] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:16.541 { 00:18:16.541 "nqn": "nqn.2016-06.io.spdk:cnode28317", 00:18:16.541 "tgt_name": "foobar", 00:18:16.541 "method": "nvmf_create_subsystem", 00:18:16.541 "req_id": 1 00:18:16.541 } 00:18:16.541 Got JSON-RPC error 
response 00:18:16.541 response: 00:18:16.541 { 00:18:16.541 "code": -32603, 00:18:16.541 "message": "Unable to find target foobar" 00:18:16.541 }' 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:16.541 { 00:18:16.541 "nqn": "nqn.2016-06.io.spdk:cnode28317", 00:18:16.541 "tgt_name": "foobar", 00:18:16.541 "method": "nvmf_create_subsystem", 00:18:16.541 "req_id": 1 00:18:16.541 } 00:18:16.541 Got JSON-RPC error response 00:18:16.541 response: 00:18:16.541 { 00:18:16.541 "code": -32603, 00:18:16.541 "message": "Unable to find target foobar" 00:18:16.541 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29216 00:18:16.541 [2024-12-13 23:58:55.605603] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29216: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:16.541 { 00:18:16.541 "nqn": "nqn.2016-06.io.spdk:cnode29216", 00:18:16.541 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:16.541 "method": "nvmf_create_subsystem", 00:18:16.541 "req_id": 1 00:18:16.541 } 00:18:16.541 Got JSON-RPC error response 00:18:16.541 response: 00:18:16.541 { 00:18:16.541 "code": -32602, 00:18:16.541 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:16.541 }' 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:16.541 { 00:18:16.541 "nqn": "nqn.2016-06.io.spdk:cnode29216", 00:18:16.541 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:16.541 "method": "nvmf_create_subsystem", 
00:18:16.541 "req_id": 1 00:18:16.541 } 00:18:16.541 Got JSON-RPC error response 00:18:16.541 response: 00:18:16.541 { 00:18:16.541 "code": -32602, 00:18:16.541 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:16.541 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:16.541 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26695 00:18:16.800 [2024-12-13 23:58:55.810285] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26695: invalid model number 'SPDK_Controller' 00:18:16.800 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:16.800 { 00:18:16.800 "nqn": "nqn.2016-06.io.spdk:cnode26695", 00:18:16.800 "model_number": "SPDK_Controller\u001f", 00:18:16.800 "method": "nvmf_create_subsystem", 00:18:16.800 "req_id": 1 00:18:16.800 } 00:18:16.800 Got JSON-RPC error response 00:18:16.801 response: 00:18:16.801 { 00:18:16.801 "code": -32602, 00:18:16.801 "message": "Invalid MN SPDK_Controller\u001f" 00:18:16.801 }' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:16.801 { 00:18:16.801 "nqn": "nqn.2016-06.io.spdk:cnode26695", 00:18:16.801 "model_number": "SPDK_Controller\u001f", 00:18:16.801 "method": "nvmf_create_subsystem", 00:18:16.801 "req_id": 1 00:18:16.801 } 00:18:16.801 Got JSON-RPC error response 00:18:16.801 response: 00:18:16.801 { 00:18:16.801 "code": -32602, 00:18:16.801 "message": "Invalid MN SPDK_Controller\u001f" 00:18:16.801 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:16.801 23:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:16.801 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:17.060 
23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '#J=H51/)!`)T:Yl7W$>9S' 00:18:17.060 23:58:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '#J=H51/)!`)T:Yl7W$>9S' nqn.2016-06.io.spdk:cnode3393 00:18:17.060 [2024-12-13 23:58:56.155454] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3393: invalid serial number '#J=H51/)!`)T:Yl7W$>9S' 00:18:17.060 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:17.060 { 00:18:17.060 "nqn": "nqn.2016-06.io.spdk:cnode3393", 00:18:17.060 "serial_number": "#J=H51/)!`)T:Yl7W$>9S", 00:18:17.060 "method": "nvmf_create_subsystem", 00:18:17.060 "req_id": 1 00:18:17.060 } 00:18:17.060 Got JSON-RPC error response 00:18:17.060 response: 00:18:17.060 { 00:18:17.060 "code": -32602, 00:18:17.060 "message": "Invalid SN #J=H51/)!`)T:Yl7W$>9S" 00:18:17.060 }' 00:18:17.060 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:17.060 { 00:18:17.060 "nqn": "nqn.2016-06.io.spdk:cnode3393", 00:18:17.060 "serial_number": "#J=H51/)!`)T:Yl7W$>9S", 00:18:17.060 "method": "nvmf_create_subsystem", 00:18:17.060 "req_id": 1 00:18:17.060 } 00:18:17.060 Got JSON-RPC error response 
00:18:17.060 response: 00:18:17.060 { 00:18:17.060 "code": -32602, 00:18:17.060 "message": "Invalid SN #J=H51/)!`)T:Yl7W$>9S" 00:18:17.060 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:17.060 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:17.060 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:17.061 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 115 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=x 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:17.320 
23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:17.320 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:17.321 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:17.321 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:17.321 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 2 == \- ]] 00:18:17.321 23:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '2s$"n(T~H&Sjx\?s '\''DK'\''<5?Y8anM_mJm$/a)!9F}' 00:18:17.580 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '2s$"n(T~H&Sjx\?s '\''DK'\''<5?Y8anM_mJm$/a)!9F}' nqn.2016-06.io.spdk:cnode30561 00:18:17.580 [2024-12-13 23:58:56.641071] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30561: invalid model number '2s$"n(T~H&Sjx\?s 'DK'<5?Y8anM_mJm$/a)!9F}' 00:18:17.580 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:17.580 { 00:18:17.580 "nqn": "nqn.2016-06.io.spdk:cnode30561", 00:18:17.580 "model_number": "2s$\"n(T~H&Sjx\\?s '\''DK'\''<5?Y8anM_mJm$/a)!9F}", 00:18:17.580 "method": "nvmf_create_subsystem", 00:18:17.580 "req_id": 1 00:18:17.580 } 00:18:17.580 Got JSON-RPC error response 00:18:17.580 response: 00:18:17.580 { 00:18:17.580 "code": -32602, 00:18:17.580 "message": "Invalid MN 2s$\"n(T~H&Sjx\\?s '\''DK'\''<5?Y8anM_mJm$/a)!9F}" 00:18:17.580 }' 00:18:17.580 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:17.580 { 00:18:17.580 "nqn": "nqn.2016-06.io.spdk:cnode30561", 00:18:17.580 "model_number": "2s$\"n(T~H&Sjx\\?s 'DK'<5?Y8anM_mJm$/a)!9F}", 00:18:17.580 "method": "nvmf_create_subsystem", 00:18:17.580 "req_id": 1 00:18:17.580 } 00:18:17.580 Got JSON-RPC error response 00:18:17.580 response: 00:18:17.580 { 00:18:17.580 "code": -32602, 00:18:17.580 "message": "Invalid MN 2s$\"n(T~H&Sjx\\?s 'DK'<5?Y8anM_mJm$/a)!9F}" 00:18:17.580 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:17.580 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:17.838 [2024-12-13 23:58:56.845883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:18:17.838 23:58:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:18.096 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:18.096 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:18.096 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:18.096 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:18.096 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:18.355 [2024-12-13 23:58:57.256001] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:18.355 { 00:18:18.355 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:18.355 "listen_address": { 00:18:18.355 "trtype": "tcp", 00:18:18.355 "traddr": "", 00:18:18.355 "trsvcid": "4421" 00:18:18.355 }, 00:18:18.355 "method": "nvmf_subsystem_remove_listener", 00:18:18.355 "req_id": 1 00:18:18.355 } 00:18:18.355 Got JSON-RPC error response 00:18:18.355 response: 00:18:18.355 { 00:18:18.355 "code": -32602, 00:18:18.355 "message": "Invalid parameters" 00:18:18.355 }' 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:18.355 { 00:18:18.355 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:18.355 "listen_address": { 00:18:18.355 "trtype": "tcp", 00:18:18.355 "traddr": "", 00:18:18.355 "trsvcid": "4421" 00:18:18.355 }, 00:18:18.355 "method": "nvmf_subsystem_remove_listener", 00:18:18.355 "req_id": 1 00:18:18.355 } 00:18:18.355 Got JSON-RPC error 
response 00:18:18.355 response: 00:18:18.355 { 00:18:18.355 "code": -32602, 00:18:18.355 "message": "Invalid parameters" 00:18:18.355 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10489 -i 0 00:18:18.355 [2024-12-13 23:58:57.452618] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10489: invalid cntlid range [0-65519] 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:18.355 { 00:18:18.355 "nqn": "nqn.2016-06.io.spdk:cnode10489", 00:18:18.355 "min_cntlid": 0, 00:18:18.355 "method": "nvmf_create_subsystem", 00:18:18.355 "req_id": 1 00:18:18.355 } 00:18:18.355 Got JSON-RPC error response 00:18:18.355 response: 00:18:18.355 { 00:18:18.355 "code": -32602, 00:18:18.355 "message": "Invalid cntlid range [0-65519]" 00:18:18.355 }' 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:18.355 { 00:18:18.355 "nqn": "nqn.2016-06.io.spdk:cnode10489", 00:18:18.355 "min_cntlid": 0, 00:18:18.355 "method": "nvmf_create_subsystem", 00:18:18.355 "req_id": 1 00:18:18.355 } 00:18:18.355 Got JSON-RPC error response 00:18:18.355 response: 00:18:18.355 { 00:18:18.355 "code": -32602, 00:18:18.355 "message": "Invalid cntlid range [0-65519]" 00:18:18.355 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:18.355 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1133 -i 65520 00:18:18.614 [2024-12-13 23:58:57.661381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1133: invalid cntlid range [65520-65519] 00:18:18.614 23:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:18.614 { 00:18:18.614 "nqn": "nqn.2016-06.io.spdk:cnode1133", 00:18:18.614 "min_cntlid": 65520, 00:18:18.614 "method": "nvmf_create_subsystem", 00:18:18.614 "req_id": 1 00:18:18.614 } 00:18:18.614 Got JSON-RPC error response 00:18:18.614 response: 00:18:18.614 { 00:18:18.614 "code": -32602, 00:18:18.614 "message": "Invalid cntlid range [65520-65519]" 00:18:18.614 }' 00:18:18.614 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:18.614 { 00:18:18.614 "nqn": "nqn.2016-06.io.spdk:cnode1133", 00:18:18.614 "min_cntlid": 65520, 00:18:18.614 "method": "nvmf_create_subsystem", 00:18:18.614 "req_id": 1 00:18:18.614 } 00:18:18.614 Got JSON-RPC error response 00:18:18.614 response: 00:18:18.614 { 00:18:18.614 "code": -32602, 00:18:18.614 "message": "Invalid cntlid range [65520-65519]" 00:18:18.614 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:18.614 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode138 -I 0 00:18:18.873 [2024-12-13 23:58:57.882116] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode138: invalid cntlid range [1-0] 00:18:18.873 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:18.873 { 00:18:18.873 "nqn": "nqn.2016-06.io.spdk:cnode138", 00:18:18.873 "max_cntlid": 0, 00:18:18.873 "method": "nvmf_create_subsystem", 00:18:18.873 "req_id": 1 00:18:18.873 } 00:18:18.873 Got JSON-RPC error response 00:18:18.873 response: 00:18:18.873 { 00:18:18.873 "code": -32602, 00:18:18.873 "message": "Invalid cntlid range [1-0]" 00:18:18.873 }' 00:18:18.873 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:18.873 { 00:18:18.873 "nqn": "nqn.2016-06.io.spdk:cnode138", 
00:18:18.873 "max_cntlid": 0, 00:18:18.873 "method": "nvmf_create_subsystem", 00:18:18.873 "req_id": 1 00:18:18.873 } 00:18:18.873 Got JSON-RPC error response 00:18:18.873 response: 00:18:18.873 { 00:18:18.873 "code": -32602, 00:18:18.873 "message": "Invalid cntlid range [1-0]" 00:18:18.873 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:18.873 23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3050 -I 65520 00:18:19.130 [2024-12-13 23:58:58.082856] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3050: invalid cntlid range [1-65520] 00:18:19.130 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:19.130 { 00:18:19.130 "nqn": "nqn.2016-06.io.spdk:cnode3050", 00:18:19.130 "max_cntlid": 65520, 00:18:19.130 "method": "nvmf_create_subsystem", 00:18:19.130 "req_id": 1 00:18:19.130 } 00:18:19.130 Got JSON-RPC error response 00:18:19.130 response: 00:18:19.130 { 00:18:19.130 "code": -32602, 00:18:19.130 "message": "Invalid cntlid range [1-65520]" 00:18:19.130 }' 00:18:19.130 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:19.130 { 00:18:19.130 "nqn": "nqn.2016-06.io.spdk:cnode3050", 00:18:19.130 "max_cntlid": 65520, 00:18:19.130 "method": "nvmf_create_subsystem", 00:18:19.130 "req_id": 1 00:18:19.130 } 00:18:19.130 Got JSON-RPC error response 00:18:19.130 response: 00:18:19.130 { 00:18:19.130 "code": -32602, 00:18:19.130 "message": "Invalid cntlid range [1-65520]" 00:18:19.130 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:19.130 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19054 -i 6 -I 5 00:18:19.388 [2024-12-13 23:58:58.283517] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19054: invalid cntlid range [6-5] 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:19.388 { 00:18:19.388 "nqn": "nqn.2016-06.io.spdk:cnode19054", 00:18:19.388 "min_cntlid": 6, 00:18:19.388 "max_cntlid": 5, 00:18:19.388 "method": "nvmf_create_subsystem", 00:18:19.388 "req_id": 1 00:18:19.388 } 00:18:19.388 Got JSON-RPC error response 00:18:19.388 response: 00:18:19.388 { 00:18:19.388 "code": -32602, 00:18:19.388 "message": "Invalid cntlid range [6-5]" 00:18:19.388 }' 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:19.388 { 00:18:19.388 "nqn": "nqn.2016-06.io.spdk:cnode19054", 00:18:19.388 "min_cntlid": 6, 00:18:19.388 "max_cntlid": 5, 00:18:19.388 "method": "nvmf_create_subsystem", 00:18:19.388 "req_id": 1 00:18:19.388 } 00:18:19.388 Got JSON-RPC error response 00:18:19.388 response: 00:18:19.388 { 00:18:19.388 "code": -32602, 00:18:19.388 "message": "Invalid cntlid range [6-5]" 00:18:19.388 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:19.388 { 00:18:19.388 "name": "foobar", 00:18:19.388 "method": "nvmf_delete_target", 00:18:19.388 "req_id": 1 00:18:19.388 } 00:18:19.388 Got JSON-RPC error response 00:18:19.388 response: 00:18:19.388 { 00:18:19.388 "code": -32602, 00:18:19.388 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:18:19.388 }' 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:19.388 { 00:18:19.388 "name": "foobar", 00:18:19.388 "method": "nvmf_delete_target", 00:18:19.388 "req_id": 1 00:18:19.388 } 00:18:19.388 Got JSON-RPC error response 00:18:19.388 response: 00:18:19.388 { 00:18:19.388 "code": -32602, 00:18:19.388 "message": "The specified target doesn't exist, cannot delete it." 00:18:19.388 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.388 rmmod nvme_tcp 00:18:19.388 rmmod nvme_fabrics 00:18:19.388 rmmod nvme_keyring 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:19.388 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3980182 ']' 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@518 -- # killprocess 3980182 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3980182 ']' 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3980182 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.389 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3980182 00:18:19.647 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.647 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.647 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3980182' 00:18:19.647 killing process with pid 3980182 00:18:19.647 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3980182 00:18:19.647 23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3980182 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@791 -- # iptables-restore 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.583 23:58:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.116 00:18:23.116 real 0m13.225s 00:18:23.116 user 0m23.966s 00:18:23.116 sys 0m5.164s 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:23.116 ************************************ 00:18:23.116 END TEST nvmf_invalid 00:18:23.116 ************************************ 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.116 ************************************ 00:18:23.116 START TEST nvmf_connect_stress 00:18:23.116 ************************************ 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:23.116 * Looking for test storage... 00:18:23.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:23.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.116 --rc genhtml_branch_coverage=1 00:18:23.116 --rc genhtml_function_coverage=1 00:18:23.116 --rc genhtml_legend=1 00:18:23.116 --rc geninfo_all_blocks=1 00:18:23.116 --rc geninfo_unexecuted_blocks=1 00:18:23.116 00:18:23.116 ' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:23.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.116 --rc genhtml_branch_coverage=1 00:18:23.116 --rc genhtml_function_coverage=1 00:18:23.116 --rc genhtml_legend=1 00:18:23.116 --rc geninfo_all_blocks=1 00:18:23.116 --rc geninfo_unexecuted_blocks=1 00:18:23.116 00:18:23.116 ' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:23.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.116 --rc genhtml_branch_coverage=1 00:18:23.116 --rc genhtml_function_coverage=1 00:18:23.116 --rc genhtml_legend=1 00:18:23.116 --rc geninfo_all_blocks=1 00:18:23.116 --rc geninfo_unexecuted_blocks=1 00:18:23.116 00:18:23.116 ' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:23.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.116 --rc genhtml_branch_coverage=1 00:18:23.116 --rc genhtml_function_coverage=1 00:18:23.116 --rc genhtml_legend=1 00:18:23.116 --rc geninfo_all_blocks=1 00:18:23.116 --rc geninfo_unexecuted_blocks=1 00:18:23.116 00:18:23.116 ' 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.116 23:59:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.116 23:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.116 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:23.117 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.584 23:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:28.584 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.584 23:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:28.584 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.584 23:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:28.584 Found net devices under 0000:af:00.0: cvl_0_0 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:28.584 Found net devices under 0000:af:00.1: cvl_0_1 
00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.584 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:28.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:18:28.585 00:18:28.585 --- 10.0.0.2 ping statistics --- 00:18:28.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.585 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:18:28.585 00:18:28.585 --- 10.0.0.1 ping statistics --- 00:18:28.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.585 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:28.585 23:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3984709 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3984709 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3984709 ']' 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.585 23:59:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:28.585 [2024-12-13 23:59:07.636415] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:28.585 [2024-12-13 23:59:07.636514] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.844 [2024-12-13 23:59:07.753001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:28.844 [2024-12-13 23:59:07.860873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.844 [2024-12-13 23:59:07.860919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.844 [2024-12-13 23:59:07.860929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.844 [2024-12-13 23:59:07.860940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.844 [2024-12-13 23:59:07.860948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.844 [2024-12-13 23:59:07.863272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.844 [2024-12-13 23:59:07.863339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.844 [2024-12-13 23:59:07.863348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 [2024-12-13 23:59:08.490366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 [2024-12-13 23:59:08.512419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 NULL1 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3984764 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.411 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.670 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:29.928 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.928 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:29.928 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:29.928 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.928 23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.186 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.186 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:30.186 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.186 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.186 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:30.754 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.754 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:30.754 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:30.754 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.754 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.013 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.013 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:31.013 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.013 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.013 23:59:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.271 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.271 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:31.271 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.271 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.271 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.528 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.528 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:31.529 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.529 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.529 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.787 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.787 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:31.787 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.787 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.787 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.354 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.354 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:32.354 23:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.354 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.354 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.613 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.613 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:32.613 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.613 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.613 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.871 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.871 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:32.871 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.871 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.871 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.129 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.129 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:33.129 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.129 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.129 
23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.696 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.696 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:33.696 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.696 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.696 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.954 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.954 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:33.954 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.954 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.954 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.213 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.213 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:34.213 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.213 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.213 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.472 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.472 
23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:34.472 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.472 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.472 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.038 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.038 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:35.038 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.038 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.038 23:59:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.296 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.296 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:35.296 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.296 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.296 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.554 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.554 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:35.554 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:18:35.554 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.554 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.813 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.813 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:35.813 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.813 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.813 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.072 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.072 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:36.072 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.072 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.072 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.637 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.637 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:36.637 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.637 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.637 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:18:36.895 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.895 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:36.895 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.895 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.895 23:59:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.153 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.153 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:37.153 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.153 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.153 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.412 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.412 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:37.412 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.412 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.412 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.979 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.979 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3984764 00:18:37.979 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.979 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.979 23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.237 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.237 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:38.237 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.237 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.237 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.495 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.495 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:38.495 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.495 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.495 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.753 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.753 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:38.753 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.753 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:38.753 23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.319 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.319 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:39.319 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.319 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.319 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.577 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:39.577 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.577 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.577 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.577 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3984764 00:18:39.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3984764) - No such process 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3984764 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.836 rmmod nvme_tcp 00:18:39.836 rmmod nvme_fabrics 00:18:39.836 rmmod nvme_keyring 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3984709 ']' 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3984709 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3984709 ']' 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3984709 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3984709 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3984709' 00:18:39.836 killing process with pid 3984709 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3984709 00:18:39.836 23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3984709 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.211 23:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.211 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.112 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.112 00:18:43.112 real 0m20.336s 00:18:43.112 user 0m44.006s 00:18:43.112 sys 0m8.089s 00:18:43.112 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.112 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.112 ************************************ 00:18:43.112 END TEST nvmf_connect_stress 00:18:43.113 ************************************ 00:18:43.113 23:59:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:43.113 23:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.113 23:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.113 23:59:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.113 ************************************ 00:18:43.113 START TEST nvmf_fused_ordering 00:18:43.113 ************************************ 00:18:43.113 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:43.372 * Looking for test storage... 
00:18:43.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.372 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:43.372 23:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.373 23:59:22 
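The `cmp_versions 1.15 '<' 2` trace above splits each version on `.` and `-` into arrays and walks them field by field. A condensed sketch of that comparison (the real `scripts/common.sh` helper differs in details, e.g. handling of non-numeric fields):

```shell
#!/usr/bin/env bash
# Sketch of cmp_versions: split dotted version strings and compare
# field by field; missing trailing fields compare as 0.
lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local IFS=.-            # split on both '.' and '-', as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then
            [ "$op" = ">" ]; return
        elif (( a < b )); then
            [ "$op" = "<" ]; return
        fi
    done
    # all fields equal
    [ "$op" = "=" ] || [ "$op" = "<=" ] || [ "$op" = ">=" ]
}
```

Numeric field-wise comparison is what makes `1.2.3 < 1.10` hold, where a plain string comparison would get it wrong.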
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:43.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.373 --rc genhtml_branch_coverage=1 00:18:43.373 --rc genhtml_function_coverage=1 00:18:43.373 --rc genhtml_legend=1 00:18:43.373 --rc geninfo_all_blocks=1 00:18:43.373 --rc geninfo_unexecuted_blocks=1 00:18:43.373 00:18:43.373 ' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:43.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.373 --rc genhtml_branch_coverage=1 00:18:43.373 --rc genhtml_function_coverage=1 00:18:43.373 --rc genhtml_legend=1 00:18:43.373 --rc geninfo_all_blocks=1 00:18:43.373 --rc geninfo_unexecuted_blocks=1 00:18:43.373 00:18:43.373 ' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:43.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.373 --rc genhtml_branch_coverage=1 00:18:43.373 --rc genhtml_function_coverage=1 00:18:43.373 --rc genhtml_legend=1 00:18:43.373 --rc geninfo_all_blocks=1 00:18:43.373 --rc geninfo_unexecuted_blocks=1 00:18:43.373 00:18:43.373 ' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:43.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.373 --rc genhtml_branch_coverage=1 00:18:43.373 --rc genhtml_function_coverage=1 00:18:43.373 --rc genhtml_legend=1 00:18:43.373 --rc geninfo_all_blocks=1 00:18:43.373 --rc geninfo_unexecuted_blocks=1 00:18:43.373 00:18:43.373 ' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
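The `paths/export.sh` trace above shows PATH accumulating the same `/opt/...` entries each time the script is sourced. A small dedupe helper (an assumption for illustration, not part of SPDK) that keeps first occurrences in order would avoid that growth:

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a ':'-separated path list, preserving the
# order of first occurrences. Assumes entries contain no glob characters.
dedupe_path() {
    local out= entry seen=:
    local IFS=:
    for entry in $1; do
        case $seen in
            *":$entry:"*) ;;                    # already emitted, skip
            *) out+=${out:+:}$entry; seen+="$entry:" ;;
        esac
    done
    printf '%s\n' "$out"
}
```

Usage: `PATH=$(dedupe_path "$PATH")` after sourcing the export script.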
00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.373 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.374 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.374 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.646 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.647 23:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:48.647 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.647 23:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:48.647 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.647 23:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:48.647 Found net devices under 0000:af:00.0: cvl_0_0 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:48.647 Found net devices under 0000:af:00.1: cvl_0_1 
00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.647 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.906 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.906 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.906 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:48.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:48.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:18:48.907 00:18:48.907 --- 10.0.0.2 ping statistics --- 00:18:48.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.907 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:18:48.907 00:18:48.907 --- 10.0.0.1 ping statistics --- 00:18:48.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.907 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:48.907 23:59:27 
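The `nvmf_tcp_init` sequence traced above moves the target NIC into a private network namespace, assigns the `10.0.0.x` addresses, brings the links up, and opens the NVMe/TCP port in iptables before verifying connectivity with `ping`. A dry-run sketch of that sequence (commands are echoed rather than executed, since they require root; interface names follow the log, with `cvl_0_0` as target and `cvl_0_1` as initiator):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based NVMe/TCP test network setup.
nvmf_tcp_init_dryrun() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1
    local ns=cvl_0_0_ns_spdk port=4420
    run() { echo "+ $*"; }                        # swap body for "$@" to execute
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"      # isolate the target NIC
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport "$port" -j ACCEPT
    run ping -c 1 10.0.0.2                        # initiator -> target check
}
```

Putting the target in its own namespace lets both "sides" of the fabric run on one host while still exercising a real TCP path between two interfaces.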
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3990047 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3990047 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3990047 ']' 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.907 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:48.907 [2024-12-13 23:59:27.981580] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:48.907 [2024-12-13 23:59:27.981680] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.166 [2024-12-13 23:59:28.100246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.166 [2024-12-13 23:59:28.202254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.166 [2024-12-13 23:59:28.202299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.166 [2024-12-13 23:59:28.202308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.166 [2024-12-13 23:59:28.202319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.166 [2024-12-13 23:59:28.202326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.166 [2024-12-13 23:59:28.203742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 [2024-12-13 23:59:28.811566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 [2024-12-13 23:59:28.827723] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 NULL1 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.733 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:49.992 [2024-12-13 23:59:28.904797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:49.992 [2024-12-13 23:59:28.904855] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3990250 ] 00:18:50.251 Attached to nqn.2016-06.io.spdk:cnode1 00:18:50.251 Namespace ID: 1 size: 1GB 00:18:50.251 fused_ordering(0) 00:18:50.251 fused_ordering(1) 00:18:50.251 fused_ordering(2) 00:18:50.251 fused_ordering(3) 00:18:50.251 fused_ordering(4) 00:18:50.251 fused_ordering(5) 00:18:50.251 fused_ordering(6) 00:18:50.251 fused_ordering(7) 00:18:50.251 fused_ordering(8) 00:18:50.251 fused_ordering(9) 00:18:50.251 fused_ordering(10) 00:18:50.251 fused_ordering(11) 00:18:50.251 fused_ordering(12) 00:18:50.251 fused_ordering(13) 00:18:50.251 fused_ordering(14) 00:18:50.251 fused_ordering(15) 00:18:50.251 fused_ordering(16) 00:18:50.251 fused_ordering(17) 00:18:50.251 fused_ordering(18) 00:18:50.251 fused_ordering(19) 00:18:50.251 fused_ordering(20) 00:18:50.251 fused_ordering(21) 00:18:50.251 fused_ordering(22) 00:18:50.251 fused_ordering(23) 00:18:50.251 fused_ordering(24) 00:18:50.251 fused_ordering(25) 00:18:50.251 fused_ordering(26) 00:18:50.251 fused_ordering(27) 00:18:50.251 
fused_ordering(28) through fused_ordering(876) [repetitive per-command counter output elided; timestamps advance from 00:18:50.251 to 00:18:51.908, with pauses before fused_ordering(411) (00:18:51.079), fused_ordering(616) (00:18:51.339), and fused_ordering(821) (00:18:51.908); log truncated mid-run at fused_ordering(876)]
00:18:51.908 fused_ordering(877) 00:18:51.908 fused_ordering(878) 00:18:51.908 fused_ordering(879) 00:18:51.908 fused_ordering(880) 00:18:51.908 fused_ordering(881) 00:18:51.908 fused_ordering(882) 00:18:51.908 fused_ordering(883) 00:18:51.908 fused_ordering(884) 00:18:51.908 fused_ordering(885) 00:18:51.908 fused_ordering(886) 00:18:51.908 fused_ordering(887) 00:18:51.908 fused_ordering(888) 00:18:51.908 fused_ordering(889) 00:18:51.908 fused_ordering(890) 00:18:51.908 fused_ordering(891) 00:18:51.908 fused_ordering(892) 00:18:51.908 fused_ordering(893) 00:18:51.908 fused_ordering(894) 00:18:51.908 fused_ordering(895) 00:18:51.908 fused_ordering(896) 00:18:51.908 fused_ordering(897) 00:18:51.908 fused_ordering(898) 00:18:51.908 fused_ordering(899) 00:18:51.908 fused_ordering(900) 00:18:51.908 fused_ordering(901) 00:18:51.908 fused_ordering(902) 00:18:51.908 fused_ordering(903) 00:18:51.908 fused_ordering(904) 00:18:51.908 fused_ordering(905) 00:18:51.908 fused_ordering(906) 00:18:51.908 fused_ordering(907) 00:18:51.908 fused_ordering(908) 00:18:51.908 fused_ordering(909) 00:18:51.908 fused_ordering(910) 00:18:51.908 fused_ordering(911) 00:18:51.908 fused_ordering(912) 00:18:51.908 fused_ordering(913) 00:18:51.908 fused_ordering(914) 00:18:51.908 fused_ordering(915) 00:18:51.908 fused_ordering(916) 00:18:51.908 fused_ordering(917) 00:18:51.908 fused_ordering(918) 00:18:51.908 fused_ordering(919) 00:18:51.909 fused_ordering(920) 00:18:51.909 fused_ordering(921) 00:18:51.909 fused_ordering(922) 00:18:51.909 fused_ordering(923) 00:18:51.909 fused_ordering(924) 00:18:51.909 fused_ordering(925) 00:18:51.909 fused_ordering(926) 00:18:51.909 fused_ordering(927) 00:18:51.909 fused_ordering(928) 00:18:51.909 fused_ordering(929) 00:18:51.909 fused_ordering(930) 00:18:51.909 fused_ordering(931) 00:18:51.909 fused_ordering(932) 00:18:51.909 fused_ordering(933) 00:18:51.909 fused_ordering(934) 00:18:51.909 fused_ordering(935) 00:18:51.909 fused_ordering(936) 00:18:51.909 
fused_ordering(937) 00:18:51.909 fused_ordering(938) 00:18:51.909 fused_ordering(939) 00:18:51.909 fused_ordering(940) 00:18:51.909 fused_ordering(941) 00:18:51.909 fused_ordering(942) 00:18:51.909 fused_ordering(943) 00:18:51.909 fused_ordering(944) 00:18:51.909 fused_ordering(945) 00:18:51.909 fused_ordering(946) 00:18:51.909 fused_ordering(947) 00:18:51.909 fused_ordering(948) 00:18:51.909 fused_ordering(949) 00:18:51.909 fused_ordering(950) 00:18:51.909 fused_ordering(951) 00:18:51.909 fused_ordering(952) 00:18:51.909 fused_ordering(953) 00:18:51.909 fused_ordering(954) 00:18:51.909 fused_ordering(955) 00:18:51.909 fused_ordering(956) 00:18:51.909 fused_ordering(957) 00:18:51.909 fused_ordering(958) 00:18:51.909 fused_ordering(959) 00:18:51.909 fused_ordering(960) 00:18:51.909 fused_ordering(961) 00:18:51.909 fused_ordering(962) 00:18:51.909 fused_ordering(963) 00:18:51.909 fused_ordering(964) 00:18:51.909 fused_ordering(965) 00:18:51.909 fused_ordering(966) 00:18:51.909 fused_ordering(967) 00:18:51.909 fused_ordering(968) 00:18:51.909 fused_ordering(969) 00:18:51.909 fused_ordering(970) 00:18:51.909 fused_ordering(971) 00:18:51.909 fused_ordering(972) 00:18:51.909 fused_ordering(973) 00:18:51.909 fused_ordering(974) 00:18:51.909 fused_ordering(975) 00:18:51.909 fused_ordering(976) 00:18:51.909 fused_ordering(977) 00:18:51.909 fused_ordering(978) 00:18:51.909 fused_ordering(979) 00:18:51.909 fused_ordering(980) 00:18:51.909 fused_ordering(981) 00:18:51.909 fused_ordering(982) 00:18:51.909 fused_ordering(983) 00:18:51.909 fused_ordering(984) 00:18:51.909 fused_ordering(985) 00:18:51.909 fused_ordering(986) 00:18:51.909 fused_ordering(987) 00:18:51.909 fused_ordering(988) 00:18:51.909 fused_ordering(989) 00:18:51.909 fused_ordering(990) 00:18:51.909 fused_ordering(991) 00:18:51.909 fused_ordering(992) 00:18:51.909 fused_ordering(993) 00:18:51.909 fused_ordering(994) 00:18:51.909 fused_ordering(995) 00:18:51.909 fused_ordering(996) 00:18:51.909 fused_ordering(997) 
00:18:51.909 fused_ordering(998) 00:18:51.909 fused_ordering(999) 00:18:51.909 fused_ordering(1000) 00:18:51.909 fused_ordering(1001) 00:18:51.909 fused_ordering(1002) 00:18:51.909 fused_ordering(1003) 00:18:51.909 fused_ordering(1004) 00:18:51.909 fused_ordering(1005) 00:18:51.909 fused_ordering(1006) 00:18:51.909 fused_ordering(1007) 00:18:51.909 fused_ordering(1008) 00:18:51.909 fused_ordering(1009) 00:18:51.909 fused_ordering(1010) 00:18:51.909 fused_ordering(1011) 00:18:51.909 fused_ordering(1012) 00:18:51.909 fused_ordering(1013) 00:18:51.909 fused_ordering(1014) 00:18:51.909 fused_ordering(1015) 00:18:51.909 fused_ordering(1016) 00:18:51.909 fused_ordering(1017) 00:18:51.909 fused_ordering(1018) 00:18:51.909 fused_ordering(1019) 00:18:51.909 fused_ordering(1020) 00:18:51.909 fused_ordering(1021) 00:18:51.909 fused_ordering(1022) 00:18:51.909 fused_ordering(1023) 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.909 23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.909 rmmod nvme_tcp 00:18:51.909 rmmod nvme_fabrics 00:18:51.909 rmmod nvme_keyring 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3990047 ']' 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3990047 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3990047 ']' 00:18:51.909 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3990047 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3990047 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3990047' 00:18:52.168 killing process with pid 3990047 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3990047 00:18:52.168 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3990047 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.105 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.641 00:18:55.641 real 0m12.018s 00:18:55.641 user 0m7.038s 00:18:55.641 sys 0m5.605s 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:55.641 ************************************ 00:18:55.641 END TEST nvmf_fused_ordering 00:18:55.641 ************************************ 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:55.641 23:59:34 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.641 ************************************ 00:18:55.641 START TEST nvmf_ns_masking 00:18:55.641 ************************************ 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:55.641 * Looking for test storage... 00:18:55.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.641 23:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.641 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.642 --rc genhtml_branch_coverage=1 00:18:55.642 --rc genhtml_function_coverage=1 00:18:55.642 --rc genhtml_legend=1 00:18:55.642 --rc geninfo_all_blocks=1 00:18:55.642 --rc geninfo_unexecuted_blocks=1 00:18:55.642 00:18:55.642 ' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.642 --rc genhtml_branch_coverage=1 00:18:55.642 --rc genhtml_function_coverage=1 00:18:55.642 --rc genhtml_legend=1 00:18:55.642 --rc geninfo_all_blocks=1 00:18:55.642 --rc geninfo_unexecuted_blocks=1 00:18:55.642 00:18:55.642 ' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.642 --rc genhtml_branch_coverage=1 00:18:55.642 --rc genhtml_function_coverage=1 00:18:55.642 --rc genhtml_legend=1 00:18:55.642 --rc geninfo_all_blocks=1 00:18:55.642 --rc geninfo_unexecuted_blocks=1 00:18:55.642 00:18:55.642 ' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:55.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.642 --rc genhtml_branch_coverage=1 00:18:55.642 --rc 
genhtml_function_coverage=1 00:18:55.642 --rc genhtml_legend=1 00:18:55.642 --rc geninfo_all_blocks=1 00:18:55.642 --rc geninfo_unexecuted_blocks=1 00:18:55.642 00:18:55.642 ' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:55.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=18803650-cfcc-4c62-900d-4e8cb28ec681 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c1537177-021c-4e1b-a3ba-16f8d174c0d4 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7ac68889-ffff-4b05-b6e5-59aa79aa69ee 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:55.642 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.643 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:00.968 23:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.968 23:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:00.968 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:00.968 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.968 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:19:00.969 Found net devices under 0000:af:00.0: cvl_0_0 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:00.969 Found net devices under 0000:af:00.1: cvl_0_1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:00.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:19:00.969 00:19:00.969 --- 10.0.0.2 ping statistics --- 00:19:00.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.969 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:00.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:00.969 00:19:00.969 --- 10.0.0.1 ping statistics --- 00:19:00.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.969 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3994156 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3994156 
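The `nvmf_tcp_init` phase traced above (nvmf/common.sh@250-291) moves one port of the e810 NIC into a network namespace, addresses both ends, opens TCP port 4420, and ping-verifies the path before the target app starts. A dry-run sketch of that sequence, using the interface names, addresses, and port taken from the log (`run()` only echoes each command, so it can be inspected without root or the test NIC):

```shell
#!/bin/sh
# Dry-run sketch of the netns topology built by nvmf_tcp_init in the log.
# Interface names (cvl_0_0/cvl_0_1), IPs, and port 4420 come from the trace;
# run() echoes each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                                   # target-side namespace
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                  # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # initiator -> target check
```

Because the target process is later launched under `ip netns exec cvl_0_0_ns_spdk`, the kernel initiator on the host side and the SPDK target see two genuinely separate network stacks on one physical NIC.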
00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3994156 ']' 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:00.969 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:00.969 [2024-12-13 23:59:39.516308] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:00.969 [2024-12-13 23:59:39.516416] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.969 [2024-12-13 23:59:39.634789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.969 [2024-12-13 23:59:39.737311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.969 [2024-12-13 23:59:39.737357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:00.969 [2024-12-13 23:59:39.737367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.969 [2024-12-13 23:59:39.737377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.969 [2024-12-13 23:59:39.737386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:00.969 [2024-12-13 23:59:39.738920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.228 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.487 [2024-12-13 23:59:40.527782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.487 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:01.487 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:01.487 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:19:01.744 Malloc1 00:19:01.744 23:59:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:02.004 Malloc2 00:19:02.004 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.262 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:02.262 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.521 [2024-12-13 23:59:41.566632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.521 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:02.521 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ac68889-ffff-4b05-b6e5-59aa79aa69ee -a 10.0.0.2 -s 4420 -i 4 00:19:02.780 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.780 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.780 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.780 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:02.780 23:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.682 [ 0]:0x1 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.682 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.940 
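The `ns_is_visible` helper exercised above (ns_masking.sh@43-45) greps `nvme list-ns /dev/nvme0` for the nsid and then compares the namespace's nguid from `nvme id-ns -o json | jq -r .nguid` against the all-zero value that a masked or inactive namespace reports. A hypothetical re-creation of that decision logic, driven by canned output (the `[ 0]:0x1` line and nguids from the log) rather than a live controller:

```shell
#!/bin/sh
# Hypothetical re-creation of the ns_is_visible check from ns_masking.sh,
# using canned output instead of a connected /dev/nvme0. The real helper runs:
#   nvme list-ns /dev/nvme0 | grep $nsid
#   nvme id-ns /dev/nvme0 -n $nsid -o json | jq -r .nguid
LIST_NS='[ 0]:0x1
[ 1]:0x2'                                   # shape of `nvme list-ns` in the log
ZERO=00000000000000000000000000000000

ns_is_visible() {                           # $1 = nsid token, $2 = its nguid
    echo "$LIST_NS" | grep -q "$1" || return 1
    [ "$2" != "$ZERO" ]                     # masked ns reports a zero nguid
}

ns_is_visible 0x1 966a4741a1b341a582c6044cc901d749 && echo "0x1 visible"
if ns_is_visible 0x3 "$ZERO"; then :; else echo "0x3 hidden"; fi
```

This is why the `NOT ns_is_visible 0x1` assertions later in the log expect a nguid of `00000000000000000000000000000000` once the namespace is hidden from the host.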
23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=966a4741a1b341a582c6044cc901d749 00:19:04.940 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 966a4741a1b341a582c6044cc901d749 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.940 23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.940 [ 0]:0x1 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=966a4741a1b341a582c6044cc901d749 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 966a4741a1b341a582c6044cc901d749 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.940 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.200 [ 1]:0x2 00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:05.200 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.459 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:05.459 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:05.718 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:05.718 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ac68889-ffff-4b05-b6e5-59aa79aa69ee -a 10.0.0.2 -s 4420 -i 4 00:19:05.977 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:05.977 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.977 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.977 23:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:05.977 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:05.977 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:07.877 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:07.877 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:07.877 23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:07.877 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:07.877 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.877 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:07.877 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:07.877 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
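The masking steps around ns_masking.sh@79-83 remove the auto-visible namespace, re-add Malloc1 with `--no-auto-visible`, and later toggle per-host visibility with `nvmf_ns_add_host` / `nvmf_ns_remove_host`. A dry-run outline of those RPCs as they appear in the log (`run()` echoes instead of invoking the real `scripts/rpc.py`, so the full workspace path is elided):

```shell
#!/bin/sh
# Dry-run outline of the namespace-masking RPCs issued in the log.
# NQNs and nsids match the trace; run() echoes instead of calling rpc.py.
run() { echo "+ rpc.py $*"; }
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

run nvmf_subsystem_remove_ns "$SUBSYS" 1
run nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 --no-auto-visible  # hidden by default
run nvmf_ns_add_host "$SUBSYS" 1 "$HOST"      # nsid 1 becomes visible to host1
run nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"   # and is hidden again
```

Each add/remove is followed in the trace by a reconnect or a fresh `ns_is_visible` check, which is what flips the observed nguid between the real value and all zeros.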
00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.136 [ 0]:0x2 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.136 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.395 [ 0]:0x1 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=966a4741a1b341a582c6044cc901d749 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 966a4741a1b341a582c6044cc901d749 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.395 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.670 [ 1]:0x2 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.670 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.929 [ 0]:0x2 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.929 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:09.188 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:09.188 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7ac68889-ffff-4b05-b6e5-59aa79aa69ee -a 10.0.0.2 -s 4420 -i 4 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:09.446 23:59:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.350 [ 0]:0x1 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.350 23:59:50 
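The `waitforserial` polling visible above (repeated `lsblk -l -o NAME,SERIAL` piped to `grep -c` until the expected device count appears) can be sketched as a standalone helper. This is a hedged reconstruction, not the exact upstream function: the serial string comes from this run, while the retry count and sleep interval are assumptions.

```shell
# Hedged sketch of the waitforserial loop from autotest_common.sh: poll lsblk
# until the expected number of block devices carrying the given serial show up.
# Retry count (15) and sleep interval (2s) are assumptions, not upstream values.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 have
    while (( i++ <= 15 )); do
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0
        sleep 2
    done
    return 1
}
```

In the log, `connect 2` passes a device count of 2, so the loop only returns once both namespaces of `cnode1` have enumerated on the host.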
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=966a4741a1b341a582c6044cc901d749 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 966a4741a1b341a582c6044cc901d749 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.350 [ 1]:0x2 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.350 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.609 
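The visibility check repeated throughout this test (`nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`, then the bash pattern match against 32 zeros) reduces to one string comparison: a namespace masked via `nvmf_ns_remove_host` identifies with an all-zero NGUID. A minimal sketch with the device query stubbed out, since it normally needs a live `/dev/nvme0`:

```shell
# A namespace masked from this host reports an all-zero NGUID through id-ns;
# ns_is_visible-style checks succeed only when the NGUID is non-zero.
# The actual lookup (nvme id-ns ... | jq -r .nguid) is replaced by an argument.
zero="00000000000000000000000000000000"

nguid_is_visible() {
    [[ "$1" != "$zero" ]]
}

nguid_is_visible "2ebb98b1c2894f9e8e411c63af2d0515" && echo "nsid visible"
nguid_is_visible "$zero" || echo "nsid masked"
# -> prints "nsid visible" then "nsid masked"
```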
23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.609 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.867 [ 0]:0x2 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.867 23:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:11.867 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:11.868 23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:12.126 [2024-12-13 23:59:51.065787] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:12.126 request: 00:19:12.126 { 00:19:12.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.126 "nsid": 2, 00:19:12.126 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.126 "method": "nvmf_ns_remove_host", 00:19:12.126 "req_id": 1 00:19:12.126 } 00:19:12.126 Got JSON-RPC error response 00:19:12.126 response: 00:19:12.126 { 00:19:12.126 "code": -32602, 00:19:12.126 "message": "Invalid parameters" 00:19:12.126 } 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:12.126 23:59:51 
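The `NOT` wrapper exercised above inverts a command's exit status, so the test passes exactly when the operation on a masked namespace fails. A simplified sketch of that pattern; the real helper in `autotest_common.sh` additionally validates the argument with `valid_exec_arg` and tracks the exit code in `es`, treating codes above 128 (signals) specially.

```shell
# Simplified NOT: succeed iff the wrapped command fails. The upstream helper
# also checks the command is executable and distinguishes signal exits (>128)
# from ordinary failures; that handling is omitted here.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

NOT false && echo "failure was expected"
NOT true  || echo "unexpected success caught"
```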
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:12.126 [ 0]:0x2 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2ebb98b1c2894f9e8e411c63af2d0515 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2ebb98b1c2894f9e8e411c63af2d0515 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:12.126 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:12.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3996119 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3996119 /var/tmp/host.sock 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3996119 ']' 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:12.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.384 23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:12.384 [2024-12-13 23:59:51.457295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:12.384 [2024-12-13 23:59:51.457386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3996119 ] 00:19:12.643 [2024-12-13 23:59:51.570991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.643 [2024-12-13 23:59:51.682422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.579 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.579 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:13.579 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.579 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:13.837 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 18803650-cfcc-4c62-900d-4e8cb28ec681 00:19:13.837 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:13.837 23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 18803650CFCC4C62900D4E8CB28EC681 -i 00:19:14.095 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c1537177-021c-4e1b-a3ba-16f8d174c0d4 00:19:14.095 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:14.095 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C1537177021C4E1BA3BA16F8D174C0D4 -i 00:19:14.353 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:14.353 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:14.611 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:14.611 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:15.178 nvme0n1 00:19:15.178 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:15.178 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:15.437 nvme1n2 00:19:15.437 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:15.437 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # sort 00:19:15.437 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:15.437 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:15.437 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:15.695 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:15.695 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:15.695 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:15.695 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:15.953 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 18803650-cfcc-4c62-900d-4e8cb28ec681 == \1\8\8\0\3\6\5\0\-\c\f\c\c\-\4\c\6\2\-\9\0\0\d\-\4\e\8\c\b\2\8\e\c\6\8\1 ]] 00:19:15.953 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:15.953 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:15.953 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:15.953 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c1537177-021c-4e1b-a3ba-16f8d174c0d4 == \c\1\5\3\7\1\7\7\-\0\2\1\c\-\4\e\1\b\-\a\3\b\a\-\1\6\f\8\d\1\7\4\c\0\d\4 ]] 00:19:15.953 23:59:55 
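The `uuid2nguid` conversion feeding the `-g` arguments above maps a canonical UUID to the 32-hex-digit NGUID form. The log only shows the `tr -d -` step from `nvmf/common.sh`; the uppercasing is inferred from the `-g` values in this run, so treat this as a sketch rather than the exact helper:

```shell
# Sketch of uuid2nguid: drop the dashes and uppercase the hex digits.
# Only `tr -d -` appears in the trace; the case conversion is an inference
# from the NGUIDs actually passed to nvmf_subsystem_add_ns in this log.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 18803650-cfcc-4c62-900d-4e8cb28ec681
# -> 18803650CFCC4C62900D4E8CB28EC681
uuid2nguid c1537177-021c-4e1b-a3ba-16f8d174c0d4
# -> C1537177021C4E1BA3BA16F8D174C0D4
```

This is why the bdev UUIDs reported by `bdev_get_bdevs` (`18803650-cfcc-...`) match the dashless NGUIDs registered on the subsystem.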
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.210 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 18803650-cfcc-4c62-900d-4e8cb28ec681 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 18803650CFCC4C62900D4E8CB28EC681 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 18803650CFCC4C62900D4E8CB28EC681 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:16.469 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 18803650CFCC4C62900D4E8CB28EC681 00:19:16.469 [2024-12-13 23:59:55.597881] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:16.469 [2024-12-13 23:59:55.597925] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:16.469 [2024-12-13 23:59:55.597938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:16.469 request: 00:19:16.469 { 00:19:16.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.469 "namespace": { 00:19:16.469 "bdev_name": "invalid", 00:19:16.469 "nsid": 1, 00:19:16.469 "nguid": "18803650CFCC4C62900D4E8CB28EC681", 00:19:16.469 "no_auto_visible": false, 00:19:16.469 "hide_metadata": false 00:19:16.469 }, 00:19:16.469 "method": "nvmf_subsystem_add_ns", 00:19:16.469 "req_id": 1 00:19:16.469 } 00:19:16.469 Got JSON-RPC error response 00:19:16.469 response: 00:19:16.469 { 00:19:16.469 "code": -32602, 00:19:16.469 "message": "Invalid parameters" 00:19:16.469 } 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:16.728 23:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 18803650-cfcc-4c62-900d-4e8cb28ec681 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 18803650CFCC4C62900D4E8CB28EC681 -i 00:19:16.728 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:19.262 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:19.262 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:19.262 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3996119 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3996119 ']' 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3996119 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:19.262 23:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3996119 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3996119' 00:19:19.262 killing process with pid 3996119 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3996119 00:19:19.262 23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3996119 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:19:21.796 rmmod nvme_tcp 00:19:21.796 rmmod nvme_fabrics 00:19:21.796 rmmod nvme_keyring 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3994156 ']' 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3994156 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3994156 ']' 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3994156 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994156 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994156' 00:19:21.796 killing process with pid 3994156 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3994156 00:19:21.796 00:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3994156 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.296 00:00:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:25.196 00:19:25.196 real 0m29.878s 00:19:25.196 user 0m38.194s 00:19:25.196 sys 0m6.428s 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:25.196 ************************************ 00:19:25.196 END TEST nvmf_ns_masking 00:19:25.196 ************************************ 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:25.196 ************************************ 00:19:25.196 START TEST nvmf_nvme_cli 00:19:25.196 ************************************ 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:25.196 * Looking for test storage... 00:19:25.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.196 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.454 --rc genhtml_branch_coverage=1 00:19:25.454 --rc genhtml_function_coverage=1 00:19:25.454 --rc genhtml_legend=1 00:19:25.454 --rc geninfo_all_blocks=1 00:19:25.454 --rc geninfo_unexecuted_blocks=1 00:19:25.454 
00:19:25.454 ' 00:19:25.454 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:25.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.454 --rc genhtml_branch_coverage=1 00:19:25.454 --rc genhtml_function_coverage=1 00:19:25.454 --rc genhtml_legend=1 00:19:25.454 --rc geninfo_all_blocks=1 00:19:25.454 --rc geninfo_unexecuted_blocks=1 00:19:25.454 00:19:25.454 ' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:25.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.455 --rc genhtml_branch_coverage=1 00:19:25.455 --rc genhtml_function_coverage=1 00:19:25.455 --rc genhtml_legend=1 00:19:25.455 --rc geninfo_all_blocks=1 00:19:25.455 --rc geninfo_unexecuted_blocks=1 00:19:25.455 00:19:25.455 ' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:25.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:25.455 --rc genhtml_branch_coverage=1 00:19:25.455 --rc genhtml_function_coverage=1 00:19:25.455 --rc genhtml_legend=1 00:19:25.455 --rc geninfo_all_blocks=1 00:19:25.455 --rc geninfo_unexecuted_blocks=1 00:19:25.455 00:19:25.455 ' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.455 00:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:25.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:25.455 00:00:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:30.720 00:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.720 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.720 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.720 00:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.720 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.721 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.721 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.721 00:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:30.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:19:30.721 00:19:30.721 --- 10.0.0.2 ping statistics --- 00:19:30.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.721 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:19:30.721 00:19:30.721 --- 10.0.0.1 ping statistics --- 00:19:30.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.721 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:30.721 00:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=4001927 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 4001927 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 4001927 ']' 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.721 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:30.721 [2024-12-14 00:00:09.827155] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:30.721 [2024-12-14 00:00:09.827246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.980 [2024-12-14 00:00:09.944328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.980 [2024-12-14 00:00:10.055422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.980 [2024-12-14 00:00:10.055475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.980 [2024-12-14 00:00:10.055486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.980 [2024-12-14 00:00:10.055496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.980 [2024-12-14 00:00:10.055505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:30.980 [2024-12-14 00:00:10.057854] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.980 [2024-12-14 00:00:10.057929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.980 [2024-12-14 00:00:10.057991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.980 [2024-12-14 00:00:10.058000] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.553 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.553 [2024-12-14 00:00:10.681789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 Malloc0 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 Malloc1 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 [2024-12-14 00:00:10.873323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.812 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.813 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:32.071 00:19:32.071 Discovery Log Number of Records 2, Generation counter 2 00:19:32.071 =====Discovery Log Entry 0====== 00:19:32.071 trtype: tcp 00:19:32.071 adrfam: ipv4 00:19:32.071 subtype: current discovery subsystem 00:19:32.071 treq: not required 00:19:32.071 portid: 0 00:19:32.071 trsvcid: 4420 
00:19:32.071 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:32.071 traddr: 10.0.0.2 00:19:32.071 eflags: explicit discovery connections, duplicate discovery information 00:19:32.071 sectype: none 00:19:32.071 =====Discovery Log Entry 1====== 00:19:32.071 trtype: tcp 00:19:32.071 adrfam: ipv4 00:19:32.071 subtype: nvme subsystem 00:19:32.071 treq: not required 00:19:32.071 portid: 0 00:19:32.071 trsvcid: 4420 00:19:32.071 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:32.071 traddr: 10.0.0.2 00:19:32.071 eflags: none 00:19:32.071 sectype: none 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:32.071 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:33.447 00:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:33.447 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:33.447 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.447 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:33.447 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:33.447 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:35.350 
00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:35.350 /dev/nvme0n2 ]] 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:35.350 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:35.351 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:35.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.610 rmmod nvme_tcp 00:19:35.610 rmmod nvme_fabrics 00:19:35.610 rmmod nvme_keyring 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 4001927 ']' 
00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 4001927 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 4001927 ']' 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 4001927 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4001927 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.610 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4001927' 00:19:35.610 killing process with pid 4001927 00:19:35.869 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 4001927 00:19:35.869 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 4001927 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.252 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:39.788 00:19:39.788 real 0m14.095s 00:19:39.788 user 0m25.459s 00:19:39.788 sys 0m4.725s 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:39.788 ************************************ 00:19:39.788 END TEST nvmf_nvme_cli 00:19:39.788 ************************************ 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.788 ************************************ 00:19:39.788 
START TEST nvmf_auth_target 00:19:39.788 ************************************ 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.788 * Looking for test storage... 00:19:39.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 
00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 
00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.788 --rc genhtml_branch_coverage=1 00:19:39.788 --rc genhtml_function_coverage=1 00:19:39.788 --rc genhtml_legend=1 00:19:39.788 --rc geninfo_all_blocks=1 00:19:39.788 --rc geninfo_unexecuted_blocks=1 00:19:39.788 00:19:39.788 ' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.788 --rc genhtml_branch_coverage=1 00:19:39.788 --rc genhtml_function_coverage=1 00:19:39.788 --rc genhtml_legend=1 00:19:39.788 --rc geninfo_all_blocks=1 00:19:39.788 --rc geninfo_unexecuted_blocks=1 00:19:39.788 00:19:39.788 ' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.788 --rc genhtml_branch_coverage=1 00:19:39.788 --rc genhtml_function_coverage=1 00:19:39.788 --rc genhtml_legend=1 00:19:39.788 --rc geninfo_all_blocks=1 00:19:39.788 --rc geninfo_unexecuted_blocks=1 00:19:39.788 00:19:39.788 ' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:39.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.788 --rc genhtml_branch_coverage=1 00:19:39.788 --rc genhtml_function_coverage=1 00:19:39.788 --rc genhtml_legend=1 00:19:39.788 --rc geninfo_all_blocks=1 00:19:39.788 --rc geninfo_unexecuted_blocks=1 00:19:39.788 00:19:39.788 ' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.788 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:39.789 00:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:39.789 00:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.789 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.063 00:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.063 00:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:45.063 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:45.063 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.063 
00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.063 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:45.064 Found net devices under 0000:af:00.0: cvl_0_0 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.064 
00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:45.064 Found net devices under 0000:af:00.1: cvl_0_1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:45.064 00:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:45.064 00:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:45.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:19:45.064 00:19:45.064 --- 10.0.0.2 ping statistics --- 00:19:45.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.064 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:45.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:19:45.064 00:19:45.064 --- 10.0.0.1 ping statistics --- 00:19:45.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.064 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:45.064 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4006331 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4006331 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4006331 ']' 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.323 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=4006567 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9ab41525ac9dcc56e33a137211afea8f06d8263cf99c275 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5iu 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9ab41525ac9dcc56e33a137211afea8f06d8263cf99c275 0 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9ab41525ac9dcc56e33a137211afea8f06d8263cf99c275 0 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9ab41525ac9dcc56e33a137211afea8f06d8263cf99c275 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5iu 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5iu 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.5iu 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3290b8285662e805dc617ea1e63c4bdf7f6e19b02e5ee339edefac4fec85804c 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dqN 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3290b8285662e805dc617ea1e63c4bdf7f6e19b02e5ee339edefac4fec85804c 3 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3290b8285662e805dc617ea1e63c4bdf7f6e19b02e5ee339edefac4fec85804c 3 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3290b8285662e805dc617ea1e63c4bdf7f6e19b02e5ee339edefac4fec85804c 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dqN 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dqN 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dqN 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2050113ed7bbdf4d2b7d8268ca257b6c 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Iun 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2050113ed7bbdf4d2b7d8268ca257b6c 1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
2050113ed7bbdf4d2b7d8268ca257b6c 1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2050113ed7bbdf4d2b7d8268ca257b6c 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Iun 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Iun 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Iun 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e5dfc241ee445f8d29cf9e430f9a3adf6163dc708ce23e87 00:19:46.262 00:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.d8H 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e5dfc241ee445f8d29cf9e430f9a3adf6163dc708ce23e87 2 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e5dfc241ee445f8d29cf9e430f9a3adf6163dc708ce23e87 2 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e5dfc241ee445f8d29cf9e430f9a3adf6163dc708ce23e87 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.d8H 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.d8H 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.d8H 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:46.262 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce4dd5f513603a77db50d66bd7e212a406386bb2a29ead7e 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.gRk 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ce4dd5f513603a77db50d66bd7e212a406386bb2a29ead7e 2 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce4dd5f513603a77db50d66bd7e212a406386bb2a29ead7e 2 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce4dd5f513603a77db50d66bd7e212a406386bb2a29ead7e 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:46.263 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.gRk 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.gRk 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.gRk 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=76fd3a26389c0b3b5bcdf7fd45c4aabd 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BxF 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 76fd3a26389c0b3b5bcdf7fd45c4aabd 1 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 76fd3a26389c0b3b5bcdf7fd45c4aabd 1 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=76fd3a26389c0b3b5bcdf7fd45c4aabd 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BxF 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BxF 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.BxF 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c12c0b051bf53e6c573d3449a09d4540bab74c72bbfecef49d07ef537087ab5 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kmj 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7c12c0b051bf53e6c573d3449a09d4540bab74c72bbfecef49d07ef537087ab5 3 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 7c12c0b051bf53e6c573d3449a09d4540bab74c72bbfecef49d07ef537087ab5 3 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c12c0b051bf53e6c573d3449a09d4540bab74c72bbfecef49d07ef537087ab5 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kmj 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kmj 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kmj 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 4006331 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4006331 ']' 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
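The trace above shows `gen_dhchap_key` building each secret: read `len/2` random bytes as hex with `xxd`, wrap them in a `DHHC-1:<digest>:...:` envelope via an elided `python -` heredoc, then write the result to a temp file and `chmod 0600` it. A minimal standalone sketch of that flow; the heredoc body is an assumption (base64 of the raw key with a little-endian CRC-32 suffix, per the DHHC-1 secret representation), since the trace does not show it:

```shell
#!/usr/bin/env bash
# Sketch of the gen_dhchap_key/format_key flow seen in the trace.
# ASSUMPTION: the elided "python -" heredoc appends a little-endian
# CRC-32 of the raw key bytes and base64-encodes the result, matching
# the DHHC-1 secret format ("DHHC-1:<digest>:<base64>:").
digest=1                                  # 1 => sha256 in the digests map
key=$(xxd -p -c0 -l 16 /dev/urandom)      # 16 raw bytes -> 32 hex chars
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
chmod 0600 "$file"                        # keys must not be world-readable
echo "$file"
```

The resulting path is what the test stores in `keys[i]`/`ckeys[i]` and later feeds to `keyring_file_add_key`.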
00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.522 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 4006567 /var/tmp/host.sock 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4006567 ']' 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:46.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.781 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5iu 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5iu 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5iu 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.dqN ]] 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqN 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqN 00:19:47.349 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqN 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Iun 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Iun 00:19:47.609 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Iun 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.d8H ]] 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d8H 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d8H 00:19:47.868 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d8H 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gRk 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gRk 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gRk 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.BxF ]] 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BxF 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.126 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BxF 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BxF 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kmj 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kmj 00:19:48.386 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kmj 00:19:48.644 00:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:48.644 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:48.644 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.644 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.644 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.644 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:48.902 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.903 00:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.903 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.161 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.161 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.420 { 00:19:49.420 "cntlid": 1, 00:19:49.420 "qid": 0, 00:19:49.420 "state": "enabled", 00:19:49.420 "thread": "nvmf_tgt_poll_group_000", 00:19:49.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:49.420 "listen_address": { 00:19:49.420 "trtype": "TCP", 00:19:49.420 "adrfam": "IPv4", 00:19:49.420 "traddr": "10.0.0.2", 00:19:49.420 "trsvcid": "4420" 00:19:49.420 }, 00:19:49.420 "peer_address": { 00:19:49.420 "trtype": "TCP", 00:19:49.420 "adrfam": "IPv4", 00:19:49.420 "traddr": "10.0.0.1", 00:19:49.420 "trsvcid": "37326" 00:19:49.420 }, 00:19:49.420 "auth": { 00:19:49.420 "state": "completed", 00:19:49.420 "digest": "sha256", 00:19:49.420 "dhgroup": "null" 00:19:49.420 } 00:19:49.420 } 00:19:49.420 ]' 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.420 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
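After each authenticated attach, `connect_authenticate` captures the `nvmf_subsystem_get_qpairs` JSON and checks `.[0].auth.digest`, `.dhgroup`, and `.state` with `jq`. The same three checks can be sketched in plain `python3` (the sample JSON below stands in for the live RPC output; field names follow the qpairs dump shown in the trace):

```shell
# Sample qpairs payload shaped like the trace's nvmf_subsystem_get_qpairs
# output (trimmed to the fields the test actually inspects).
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
python3 - "$qpairs" <<'EOF'
import json, sys
auth = json.loads(sys.argv[1])[0]["auth"]
# Mirrors: jq -r '.[0].auth.digest' / '.[0].auth.dhgroup' / '.[0].auth.state'
assert auth["digest"] == "sha256"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"   # DH-HMAC-CHAP handshake finished
print("auth completed")
EOF
```

A qpair whose `auth.state` is anything other than `completed` would indicate the DH-HMAC-CHAP exchange did not finish for that connection.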
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.679 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:19:49.679 00:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:50.251 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.510 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.510 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.769 { 00:19:50.769 "cntlid": 3, 00:19:50.769 "qid": 0, 00:19:50.769 "state": "enabled", 00:19:50.769 "thread": "nvmf_tgt_poll_group_000", 00:19:50.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:50.769 "listen_address": { 00:19:50.769 "trtype": "TCP", 00:19:50.769 "adrfam": "IPv4", 00:19:50.769 
"traddr": "10.0.0.2", 00:19:50.769 "trsvcid": "4420" 00:19:50.769 }, 00:19:50.769 "peer_address": { 00:19:50.769 "trtype": "TCP", 00:19:50.769 "adrfam": "IPv4", 00:19:50.769 "traddr": "10.0.0.1", 00:19:50.769 "trsvcid": "37342" 00:19:50.769 }, 00:19:50.769 "auth": { 00:19:50.769 "state": "completed", 00:19:50.769 "digest": "sha256", 00:19:50.769 "dhgroup": "null" 00:19:50.769 } 00:19:50.769 } 00:19:50.769 ]' 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.769 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.028 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:51.028 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.028 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.028 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.028 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.286 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:19:51.286 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.854 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.855 00:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.113 00:19:52.113 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.113 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.114 
00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.380 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.380 { 00:19:52.380 "cntlid": 5, 00:19:52.380 "qid": 0, 00:19:52.380 "state": "enabled", 00:19:52.380 "thread": "nvmf_tgt_poll_group_000", 00:19:52.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:52.380 "listen_address": { 00:19:52.380 "trtype": "TCP", 00:19:52.381 "adrfam": "IPv4", 00:19:52.381 "traddr": "10.0.0.2", 00:19:52.381 "trsvcid": "4420" 00:19:52.381 }, 00:19:52.381 "peer_address": { 00:19:52.381 "trtype": "TCP", 00:19:52.381 "adrfam": "IPv4", 00:19:52.381 "traddr": "10.0.0.1", 00:19:52.381 "trsvcid": "37354" 00:19:52.381 }, 00:19:52.381 "auth": { 00:19:52.381 "state": "completed", 00:19:52.381 "digest": "sha256", 00:19:52.381 "dhgroup": "null" 00:19:52.381 } 00:19:52.381 } 00:19:52.381 ]' 00:19:52.381 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.381 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.381 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:19:52.640 00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.208 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.467 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.725 00:19:53.725 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.725 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.725 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.983 
00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.983 { 00:19:53.983 "cntlid": 7, 00:19:53.983 "qid": 0, 00:19:53.983 "state": "enabled", 00:19:53.983 "thread": "nvmf_tgt_poll_group_000", 00:19:53.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:53.983 "listen_address": { 00:19:53.983 "trtype": "TCP", 00:19:53.983 "adrfam": "IPv4", 00:19:53.983 "traddr": "10.0.0.2", 00:19:53.983 "trsvcid": "4420" 00:19:53.983 }, 00:19:53.983 "peer_address": { 00:19:53.983 "trtype": "TCP", 00:19:53.983 "adrfam": "IPv4", 00:19:53.983 "traddr": "10.0.0.1", 00:19:53.983 "trsvcid": "37374" 00:19:53.983 }, 00:19:53.983 "auth": { 00:19:53.983 "state": "completed", 00:19:53.983 "digest": "sha256", 00:19:53.983 "dhgroup": "null" 00:19:53.983 } 00:19:53.983 } 00:19:53.983 ]' 00:19:53.983 00:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.983 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.242 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:19:54.242 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:54.809 00:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.068 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.069 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.069 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.069 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.327 00:19:55.327 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.327 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.327 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.586 { 00:19:55.586 "cntlid": 9, 00:19:55.586 "qid": 0, 00:19:55.586 "state": "enabled", 00:19:55.586 "thread": "nvmf_tgt_poll_group_000", 00:19:55.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:55.586 "listen_address": { 00:19:55.586 "trtype": "TCP", 00:19:55.586 "adrfam": "IPv4", 00:19:55.586 "traddr": "10.0.0.2", 00:19:55.586 "trsvcid": "4420" 00:19:55.586 }, 00:19:55.586 "peer_address": { 00:19:55.586 "trtype": "TCP", 00:19:55.586 "adrfam": "IPv4", 00:19:55.586 "traddr": "10.0.0.1", 00:19:55.586 "trsvcid": "37406" 00:19:55.586 
}, 00:19:55.586 "auth": { 00:19:55.586 "state": "completed", 00:19:55.586 "digest": "sha256", 00:19:55.586 "dhgroup": "ffdhe2048" 00:19:55.586 } 00:19:55.586 } 00:19:55.586 ]' 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.586 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.845 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:19:55.845 00:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret 
DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.411 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.670 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.929 00:19:56.929 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.929 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.929 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.188 { 00:19:57.188 "cntlid": 11, 00:19:57.188 "qid": 0, 00:19:57.188 "state": "enabled", 00:19:57.188 "thread": "nvmf_tgt_poll_group_000", 00:19:57.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.188 "listen_address": { 00:19:57.188 "trtype": "TCP", 00:19:57.188 "adrfam": "IPv4", 00:19:57.188 "traddr": "10.0.0.2", 00:19:57.188 "trsvcid": "4420" 00:19:57.188 }, 00:19:57.188 "peer_address": { 00:19:57.188 "trtype": "TCP", 00:19:57.188 "adrfam": "IPv4", 00:19:57.188 "traddr": "10.0.0.1", 00:19:57.188 "trsvcid": "37438" 00:19:57.188 }, 00:19:57.188 "auth": { 00:19:57.188 "state": "completed", 00:19:57.188 "digest": "sha256", 00:19:57.188 "dhgroup": "ffdhe2048" 00:19:57.188 } 00:19:57.188 } 00:19:57.188 ]' 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.188 00:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.188 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.447 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:19:57.447 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:19:58.015 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.015 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.274 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.533 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.533 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.792 00:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.792 { 00:19:58.792 "cntlid": 13, 00:19:58.792 "qid": 0, 00:19:58.792 "state": "enabled", 00:19:58.792 "thread": "nvmf_tgt_poll_group_000", 00:19:58.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:58.792 "listen_address": { 00:19:58.792 "trtype": "TCP", 00:19:58.792 "adrfam": "IPv4", 00:19:58.792 "traddr": "10.0.0.2", 00:19:58.792 "trsvcid": "4420" 00:19:58.792 }, 00:19:58.792 "peer_address": { 00:19:58.792 "trtype": "TCP", 00:19:58.792 "adrfam": "IPv4", 00:19:58.792 "traddr": "10.0.0.1", 00:19:58.792 "trsvcid": "53974" 00:19:58.792 }, 00:19:58.792 "auth": { 00:19:58.792 "state": "completed", 00:19:58.792 "digest": "sha256", 00:19:58.792 "dhgroup": "ffdhe2048" 00:19:58.792 } 00:19:58.792 } 00:19:58.792 ]' 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.792 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.051 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:19:59.051 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.618 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.880 00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.880 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.142 { 00:20:00.142 "cntlid": 15, 00:20:00.142 "qid": 0, 00:20:00.142 "state": "enabled", 00:20:00.142 "thread": "nvmf_tgt_poll_group_000", 00:20:00.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.142 "listen_address": { 00:20:00.142 "trtype": "TCP", 00:20:00.142 "adrfam": "IPv4", 00:20:00.142 "traddr": "10.0.0.2", 00:20:00.142 "trsvcid": "4420" 00:20:00.142 }, 00:20:00.142 "peer_address": { 00:20:00.142 "trtype": "TCP", 00:20:00.142 "adrfam": "IPv4", 00:20:00.142 "traddr": "10.0.0.1", 
00:20:00.142 "trsvcid": "54000" 00:20:00.142 }, 00:20:00.142 "auth": { 00:20:00.142 "state": "completed", 00:20:00.142 "digest": "sha256", 00:20:00.142 "dhgroup": "ffdhe2048" 00:20:00.142 } 00:20:00.142 } 00:20:00.142 ]' 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.142 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.401 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.401 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.401 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.401 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.401 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.660 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:00.660 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.227 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.228 00:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.228 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.486 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.486 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.486 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.486 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.486 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.745 { 00:20:01.745 "cntlid": 17, 00:20:01.745 "qid": 0, 00:20:01.745 "state": "enabled", 00:20:01.745 "thread": "nvmf_tgt_poll_group_000", 00:20:01.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.745 "listen_address": { 00:20:01.745 "trtype": "TCP", 00:20:01.745 "adrfam": "IPv4", 00:20:01.745 "traddr": "10.0.0.2", 00:20:01.745 "trsvcid": "4420" 00:20:01.745 }, 00:20:01.745 "peer_address": { 00:20:01.745 "trtype": "TCP", 00:20:01.745 "adrfam": "IPv4", 00:20:01.745 "traddr": "10.0.0.1", 00:20:01.745 "trsvcid": "54028" 00:20:01.745 }, 00:20:01.745 "auth": { 00:20:01.745 "state": "completed", 00:20:01.745 "digest": "sha256", 00:20:01.745 "dhgroup": "ffdhe3072" 00:20:01.745 } 00:20:01.745 } 00:20:01.745 ]' 00:20:01.745 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.003 00:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.003 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.262 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:02.262 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.829 00:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.829 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.829 00:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.088 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.088 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.088 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.088 00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.088 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.347 { 00:20:03.347 "cntlid": 19, 00:20:03.347 "qid": 0, 00:20:03.347 "state": "enabled", 00:20:03.347 "thread": "nvmf_tgt_poll_group_000", 00:20:03.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.347 "listen_address": { 00:20:03.347 "trtype": "TCP", 00:20:03.347 "adrfam": "IPv4", 00:20:03.347 "traddr": "10.0.0.2", 00:20:03.347 "trsvcid": "4420" 00:20:03.347 }, 00:20:03.347 "peer_address": { 00:20:03.347 "trtype": "TCP", 00:20:03.347 "adrfam": "IPv4", 00:20:03.347 "traddr": "10.0.0.1", 00:20:03.347 "trsvcid": "54060" 00:20:03.347 }, 00:20:03.347 "auth": { 00:20:03.347 "state": "completed", 00:20:03.347 "digest": "sha256", 00:20:03.347 "dhgroup": "ffdhe3072" 00:20:03.347 } 00:20:03.347 } 00:20:03.347 ]' 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.347 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.606 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.606 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.606 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.606 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.606 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.865 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:03.865 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.443 00:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.443 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.707 00:20:04.707 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.707 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.707 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.966 { 00:20:04.966 "cntlid": 21, 00:20:04.966 "qid": 0, 00:20:04.966 "state": "enabled", 00:20:04.966 "thread": "nvmf_tgt_poll_group_000", 00:20:04.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.966 "listen_address": { 00:20:04.966 "trtype": "TCP", 00:20:04.966 "adrfam": "IPv4", 00:20:04.966 "traddr": "10.0.0.2", 00:20:04.966 
"trsvcid": "4420" 00:20:04.966 }, 00:20:04.966 "peer_address": { 00:20:04.966 "trtype": "TCP", 00:20:04.966 "adrfam": "IPv4", 00:20:04.966 "traddr": "10.0.0.1", 00:20:04.966 "trsvcid": "54092" 00:20:04.966 }, 00:20:04.966 "auth": { 00:20:04.966 "state": "completed", 00:20:04.966 "digest": "sha256", 00:20:04.966 "dhgroup": "ffdhe3072" 00:20:04.966 } 00:20:04.966 } 00:20:04.966 ]' 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:04.966 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.225 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.225 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.225 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.225 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:05.225 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.792 00:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.055 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.314 00:20:06.314 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.314 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.314 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.573 { 00:20:06.573 "cntlid": 23, 00:20:06.573 "qid": 0, 00:20:06.573 "state": "enabled", 00:20:06.573 "thread": "nvmf_tgt_poll_group_000", 00:20:06.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.573 "listen_address": { 00:20:06.573 "trtype": "TCP", 00:20:06.573 "adrfam": "IPv4", 00:20:06.573 "traddr": "10.0.0.2", 00:20:06.573 "trsvcid": "4420" 00:20:06.573 }, 00:20:06.573 "peer_address": { 00:20:06.573 "trtype": "TCP", 00:20:06.573 "adrfam": "IPv4", 00:20:06.573 "traddr": "10.0.0.1", 00:20:06.573 "trsvcid": "54128" 00:20:06.573 }, 00:20:06.573 "auth": { 00:20:06.573 "state": "completed", 00:20:06.573 "digest": "sha256", 00:20:06.573 "dhgroup": "ffdhe3072" 00:20:06.573 } 00:20:06.573 } 00:20:06.573 ]' 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.573 00:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.573 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.831 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:06.831 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.399 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.657 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.916 00:20:07.916 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.916 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.916 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.174 00:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.174 { 00:20:08.174 "cntlid": 25, 00:20:08.174 "qid": 0, 00:20:08.174 "state": "enabled", 00:20:08.174 "thread": "nvmf_tgt_poll_group_000", 00:20:08.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.174 "listen_address": { 00:20:08.174 "trtype": "TCP", 00:20:08.174 "adrfam": "IPv4", 00:20:08.174 "traddr": "10.0.0.2", 00:20:08.174 "trsvcid": "4420" 00:20:08.174 }, 00:20:08.174 "peer_address": { 00:20:08.174 "trtype": "TCP", 00:20:08.174 "adrfam": "IPv4", 00:20:08.174 "traddr": "10.0.0.1", 00:20:08.174 "trsvcid": "59844" 00:20:08.174 }, 00:20:08.174 "auth": { 00:20:08.174 "state": "completed", 00:20:08.174 "digest": "sha256", 00:20:08.174 "dhgroup": "ffdhe4096" 00:20:08.174 } 00:20:08.174 } 00:20:08.174 ]' 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.174 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.433 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:08.433 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.006 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.007 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.007 00:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.270 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.530 00:20:09.530 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.530 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.530 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.788 { 00:20:09.788 "cntlid": 27, 00:20:09.788 "qid": 0, 00:20:09.788 "state": "enabled", 00:20:09.788 "thread": "nvmf_tgt_poll_group_000", 00:20:09.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.788 "listen_address": { 00:20:09.788 "trtype": "TCP", 00:20:09.788 "adrfam": "IPv4", 00:20:09.788 "traddr": "10.0.0.2", 00:20:09.788 
"trsvcid": "4420" 00:20:09.788 }, 00:20:09.788 "peer_address": { 00:20:09.788 "trtype": "TCP", 00:20:09.788 "adrfam": "IPv4", 00:20:09.788 "traddr": "10.0.0.1", 00:20:09.788 "trsvcid": "59880" 00:20:09.788 }, 00:20:09.788 "auth": { 00:20:09.788 "state": "completed", 00:20:09.788 "digest": "sha256", 00:20:09.788 "dhgroup": "ffdhe4096" 00:20:09.788 } 00:20:09.788 } 00:20:09.788 ]' 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.788 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.046 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:10.046 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.614 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.873 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.131 00:20:11.131 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.131 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:11.131 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.388 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.389 { 00:20:11.389 "cntlid": 29, 00:20:11.389 "qid": 0, 00:20:11.389 "state": "enabled", 00:20:11.389 "thread": "nvmf_tgt_poll_group_000", 00:20:11.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.389 "listen_address": { 00:20:11.389 "trtype": "TCP", 00:20:11.389 "adrfam": "IPv4", 00:20:11.389 "traddr": "10.0.0.2", 00:20:11.389 "trsvcid": "4420" 00:20:11.389 }, 00:20:11.389 "peer_address": { 00:20:11.389 "trtype": "TCP", 00:20:11.389 "adrfam": "IPv4", 00:20:11.389 "traddr": "10.0.0.1", 00:20:11.389 "trsvcid": "59900" 00:20:11.389 }, 00:20:11.389 "auth": { 00:20:11.389 "state": "completed", 00:20:11.389 "digest": "sha256", 00:20:11.389 "dhgroup": "ffdhe4096" 00:20:11.389 } 00:20:11.389 } 00:20:11.389 ]' 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.389 00:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.389 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.646 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:11.646 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.213 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.472 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.731 00:20:12.731 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.731 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.731 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.990 { 00:20:12.990 "cntlid": 31, 00:20:12.990 "qid": 0, 00:20:12.990 "state": "enabled", 00:20:12.990 "thread": "nvmf_tgt_poll_group_000", 00:20:12.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.990 "listen_address": { 00:20:12.990 "trtype": "TCP", 00:20:12.990 "adrfam": "IPv4", 00:20:12.990 "traddr": "10.0.0.2", 00:20:12.990 "trsvcid": "4420" 00:20:12.990 }, 00:20:12.990 "peer_address": { 00:20:12.990 "trtype": "TCP", 00:20:12.990 "adrfam": "IPv4", 00:20:12.990 "traddr": "10.0.0.1", 00:20:12.990 "trsvcid": "59920" 00:20:12.990 }, 00:20:12.990 "auth": { 00:20:12.990 "state": "completed", 00:20:12.990 "digest": "sha256", 00:20:12.990 "dhgroup": "ffdhe4096" 00:20:12.990 } 00:20:12.990 } 00:20:12.990 ]' 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.990 00:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.991 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.991 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.991 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.991 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.991 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.249 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:13.249 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.817 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:13.817 00:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.075 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.076 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.334 00:20:14.334 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.334 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.334 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.593 { 00:20:14.593 "cntlid": 33, 00:20:14.593 "qid": 0, 00:20:14.593 "state": "enabled", 00:20:14.593 "thread": "nvmf_tgt_poll_group_000", 00:20:14.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.593 "listen_address": { 00:20:14.593 "trtype": "TCP", 00:20:14.593 "adrfam": "IPv4", 00:20:14.593 "traddr": "10.0.0.2", 00:20:14.593 
"trsvcid": "4420" 00:20:14.593 }, 00:20:14.593 "peer_address": { 00:20:14.593 "trtype": "TCP", 00:20:14.593 "adrfam": "IPv4", 00:20:14.593 "traddr": "10.0.0.1", 00:20:14.593 "trsvcid": "59956" 00:20:14.593 }, 00:20:14.593 "auth": { 00:20:14.593 "state": "completed", 00:20:14.593 "digest": "sha256", 00:20:14.593 "dhgroup": "ffdhe6144" 00:20:14.593 } 00:20:14.593 } 00:20:14.593 ]' 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.593 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.852 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.852 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.852 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.852 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:14.852 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:15.417 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.417 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.417 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.417 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.418 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.418 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.418 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.418 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.676 00:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.676 00:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.935 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.193 { 00:20:16.193 "cntlid": 35, 00:20:16.193 "qid": 0, 00:20:16.193 "state": "enabled", 00:20:16.193 "thread": "nvmf_tgt_poll_group_000", 00:20:16.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.193 "listen_address": { 00:20:16.193 "trtype": "TCP", 00:20:16.193 "adrfam": "IPv4", 00:20:16.193 "traddr": "10.0.0.2", 00:20:16.193 "trsvcid": "4420" 00:20:16.193 }, 00:20:16.193 "peer_address": { 00:20:16.193 "trtype": "TCP", 00:20:16.193 "adrfam": "IPv4", 00:20:16.193 "traddr": "10.0.0.1", 00:20:16.193 "trsvcid": "59982" 00:20:16.193 }, 00:20:16.193 "auth": { 00:20:16.193 "state": "completed", 00:20:16.193 "digest": "sha256", 00:20:16.193 "dhgroup": "ffdhe6144" 00:20:16.193 } 00:20:16.193 } 00:20:16.193 ]' 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.193 00:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.193 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.451 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.451 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.451 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.451 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.451 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.710 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:16.710 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.275 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.842 00:20:17.842 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.842 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.843 00:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.843 { 00:20:17.843 "cntlid": 37, 00:20:17.843 "qid": 0, 00:20:17.843 "state": "enabled", 00:20:17.843 "thread": "nvmf_tgt_poll_group_000", 00:20:17.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.843 "listen_address": { 00:20:17.843 "trtype": "TCP", 00:20:17.843 "adrfam": "IPv4", 00:20:17.843 "traddr": "10.0.0.2", 00:20:17.843 "trsvcid": "4420" 00:20:17.843 }, 00:20:17.843 "peer_address": { 00:20:17.843 "trtype": "TCP", 00:20:17.843 "adrfam": "IPv4", 00:20:17.843 "traddr": "10.0.0.1", 00:20:17.843 "trsvcid": "60012" 00:20:17.843 }, 00:20:17.843 "auth": { 00:20:17.843 "state": "completed", 00:20:17.843 "digest": "sha256", 00:20:17.843 "dhgroup": "ffdhe6144" 00:20:17.843 } 00:20:17.843 } 00:20:17.843 ]' 00:20:17.843 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.101 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.101 00:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.101 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.101 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.101 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.101 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.101 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.360 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:18.360 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.928 00:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.928 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.493 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.494 { 00:20:19.494 "cntlid": 39, 00:20:19.494 "qid": 0, 00:20:19.494 "state": "enabled", 00:20:19.494 "thread": "nvmf_tgt_poll_group_000", 00:20:19.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.494 "listen_address": { 00:20:19.494 "trtype": "TCP", 00:20:19.494 "adrfam": 
"IPv4", 00:20:19.494 "traddr": "10.0.0.2", 00:20:19.494 "trsvcid": "4420" 00:20:19.494 }, 00:20:19.494 "peer_address": { 00:20:19.494 "trtype": "TCP", 00:20:19.494 "adrfam": "IPv4", 00:20:19.494 "traddr": "10.0.0.1", 00:20:19.494 "trsvcid": "35162" 00:20:19.494 }, 00:20:19.494 "auth": { 00:20:19.494 "state": "completed", 00:20:19.494 "digest": "sha256", 00:20:19.494 "dhgroup": "ffdhe6144" 00:20:19.494 } 00:20:19.494 } 00:20:19.494 ]' 00:20:19.494 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.752 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.010 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:20.010 00:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:20.578 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.579 
00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.579 00:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.146 00:20:21.146 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.146 00:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.146 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.404 { 00:20:21.404 "cntlid": 41, 00:20:21.404 "qid": 0, 00:20:21.404 "state": "enabled", 00:20:21.404 "thread": "nvmf_tgt_poll_group_000", 00:20:21.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.404 "listen_address": { 00:20:21.404 "trtype": "TCP", 00:20:21.404 "adrfam": "IPv4", 00:20:21.404 "traddr": "10.0.0.2", 00:20:21.404 "trsvcid": "4420" 00:20:21.404 }, 00:20:21.404 "peer_address": { 00:20:21.404 "trtype": "TCP", 00:20:21.404 "adrfam": "IPv4", 00:20:21.404 "traddr": "10.0.0.1", 00:20:21.404 "trsvcid": "35202" 00:20:21.404 }, 00:20:21.404 "auth": { 00:20:21.404 "state": "completed", 00:20:21.404 "digest": "sha256", 00:20:21.404 "dhgroup": "ffdhe8192" 00:20:21.404 } 00:20:21.404 } 00:20:21.404 ]' 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.404 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.663 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:21.663 00:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.229 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.230 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.488 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.056 00:20:23.056 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.056 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.056 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.056 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.056 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.056 00:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.056 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.056 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.056 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.056 { 00:20:23.056 "cntlid": 43, 00:20:23.056 "qid": 0, 00:20:23.056 "state": "enabled", 00:20:23.056 "thread": "nvmf_tgt_poll_group_000", 00:20:23.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.056 "listen_address": { 00:20:23.056 "trtype": "TCP", 00:20:23.056 "adrfam": "IPv4", 00:20:23.056 "traddr": "10.0.0.2", 00:20:23.056 "trsvcid": "4420" 00:20:23.056 }, 00:20:23.056 "peer_address": { 00:20:23.056 "trtype": "TCP", 00:20:23.056 "adrfam": "IPv4", 00:20:23.056 "traddr": "10.0.0.1", 00:20:23.056 "trsvcid": "35238" 00:20:23.056 }, 00:20:23.056 "auth": { 00:20:23.056 "state": "completed", 00:20:23.056 "digest": "sha256", 00:20:23.056 "dhgroup": "ffdhe8192" 00:20:23.056 } 00:20:23.056 } 00:20:23.056 ]' 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.315 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.573 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:23.573 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.141 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.401 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.660 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.919 00:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.919 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.919 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.919 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.919 { 00:20:24.919 "cntlid": 45, 00:20:24.919 "qid": 0, 00:20:24.919 "state": "enabled", 00:20:24.919 "thread": "nvmf_tgt_poll_group_000", 00:20:24.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.919 
"listen_address": { 00:20:24.919 "trtype": "TCP", 00:20:24.919 "adrfam": "IPv4", 00:20:24.919 "traddr": "10.0.0.2", 00:20:24.919 "trsvcid": "4420" 00:20:24.919 }, 00:20:24.919 "peer_address": { 00:20:24.919 "trtype": "TCP", 00:20:24.919 "adrfam": "IPv4", 00:20:24.919 "traddr": "10.0.0.1", 00:20:24.919 "trsvcid": "35260" 00:20:24.919 }, 00:20:24.919 "auth": { 00:20:24.919 "state": "completed", 00:20:24.919 "digest": "sha256", 00:20:24.919 "dhgroup": "ffdhe8192" 00:20:24.919 } 00:20:24.919 } 00:20:24.919 ]' 00:20:24.919 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.919 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.178 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.436 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:25.437 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.003 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.003 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:26.004 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.004 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:26.570 00:20:26.570 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.570 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:26.570 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.829 { 00:20:26.829 "cntlid": 47, 00:20:26.829 "qid": 0, 00:20:26.829 "state": "enabled", 00:20:26.829 "thread": "nvmf_tgt_poll_group_000", 00:20:26.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.829 "listen_address": { 00:20:26.829 "trtype": "TCP", 00:20:26.829 "adrfam": "IPv4", 00:20:26.829 "traddr": "10.0.0.2", 00:20:26.829 "trsvcid": "4420" 00:20:26.829 }, 00:20:26.829 "peer_address": { 00:20:26.829 "trtype": "TCP", 00:20:26.829 "adrfam": "IPv4", 00:20:26.829 "traddr": "10.0.0.1", 00:20:26.829 "trsvcid": "35294" 00:20:26.829 }, 00:20:26.829 "auth": { 00:20:26.829 "state": "completed", 00:20:26.829 "digest": "sha256", 00:20:26.829 "dhgroup": "ffdhe8192" 00:20:26.829 } 00:20:26.829 } 00:20:26.829 ]' 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.829 00:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.829 00:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.087 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:27.087 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.656 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.914 
00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.914 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.173 00:20:28.173 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.173 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.173 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.432 { 00:20:28.432 "cntlid": 49, 00:20:28.432 "qid": 0, 00:20:28.432 "state": "enabled", 00:20:28.432 "thread": "nvmf_tgt_poll_group_000", 00:20:28.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.432 "listen_address": { 00:20:28.432 "trtype": "TCP", 00:20:28.432 "adrfam": "IPv4", 00:20:28.432 "traddr": "10.0.0.2", 00:20:28.432 "trsvcid": "4420" 00:20:28.432 }, 00:20:28.432 "peer_address": { 00:20:28.432 "trtype": "TCP", 00:20:28.432 "adrfam": "IPv4", 00:20:28.432 "traddr": "10.0.0.1", 00:20:28.432 "trsvcid": "51486" 00:20:28.432 }, 00:20:28.432 "auth": { 00:20:28.432 "state": "completed", 00:20:28.432 "digest": "sha384", 00:20:28.432 "dhgroup": "null" 00:20:28.432 } 00:20:28.432 } 00:20:28.432 ]' 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:28.432 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.691 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:28.691 00:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:29.267 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.268 00:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.268 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.562 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.945 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.945 { 00:20:29.945 "cntlid": 51, 00:20:29.945 "qid": 0, 00:20:29.945 "state": "enabled", 00:20:29.945 "thread": "nvmf_tgt_poll_group_000", 00:20:29.945 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.945 "listen_address": { 00:20:29.945 "trtype": "TCP", 00:20:29.945 "adrfam": "IPv4", 00:20:29.945 "traddr": "10.0.0.2", 00:20:29.945 "trsvcid": "4420" 00:20:29.945 }, 00:20:29.945 "peer_address": { 00:20:29.945 "trtype": "TCP", 00:20:29.945 "adrfam": "IPv4", 00:20:29.945 "traddr": "10.0.0.1", 00:20:29.945 "trsvcid": "51516" 00:20:29.945 }, 00:20:29.945 "auth": { 00:20:29.945 "state": "completed", 00:20:29.945 "digest": "sha384", 00:20:29.945 "dhgroup": "null" 00:20:29.945 } 00:20:29.945 } 00:20:29.945 ]' 00:20:29.945 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.945 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.945 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.218 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:30.218 00:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.785 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.786 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.786 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.044 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.302 00:20:31.302 00:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.302 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.302 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.560 { 00:20:31.560 "cntlid": 53, 00:20:31.560 "qid": 0, 00:20:31.560 "state": "enabled", 00:20:31.560 "thread": "nvmf_tgt_poll_group_000", 00:20:31.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.560 "listen_address": { 00:20:31.560 "trtype": "TCP", 00:20:31.560 "adrfam": "IPv4", 00:20:31.560 "traddr": "10.0.0.2", 00:20:31.560 "trsvcid": "4420" 00:20:31.560 }, 00:20:31.560 "peer_address": { 00:20:31.560 "trtype": "TCP", 00:20:31.560 "adrfam": "IPv4", 00:20:31.560 "traddr": "10.0.0.1", 00:20:31.560 "trsvcid": "51544" 00:20:31.560 }, 00:20:31.560 "auth": { 00:20:31.560 "state": "completed", 00:20:31.560 "digest": "sha384", 00:20:31.560 "dhgroup": "null" 00:20:31.560 } 00:20:31.560 } 00:20:31.560 ]' 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.560 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.819 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:31.819 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.385 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:32.644 
00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.644 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.903 00:20:32.903 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.903 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.903 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.161 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.162 00:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.162 { 00:20:33.162 "cntlid": 55, 00:20:33.162 "qid": 0, 00:20:33.162 "state": "enabled", 00:20:33.162 "thread": "nvmf_tgt_poll_group_000", 00:20:33.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:33.162 "listen_address": { 00:20:33.162 "trtype": "TCP", 00:20:33.162 "adrfam": "IPv4", 00:20:33.162 "traddr": "10.0.0.2", 00:20:33.162 "trsvcid": "4420" 00:20:33.162 }, 00:20:33.162 "peer_address": { 00:20:33.162 "trtype": "TCP", 00:20:33.162 "adrfam": "IPv4", 00:20:33.162 "traddr": "10.0.0.1", 00:20:33.162 "trsvcid": "51562" 00:20:33.162 }, 00:20:33.162 "auth": { 00:20:33.162 "state": "completed", 00:20:33.162 "digest": "sha384", 00:20:33.162 "dhgroup": "null" 00:20:33.162 } 00:20:33.162 } 00:20:33.162 ]' 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.162 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.420 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:33.420 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.987 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.987 00:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.246 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.504 00:20:34.504 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.505 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.505 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.762 { 00:20:34.762 "cntlid": 57, 00:20:34.762 "qid": 0, 00:20:34.762 "state": "enabled", 00:20:34.762 "thread": "nvmf_tgt_poll_group_000", 00:20:34.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.762 "listen_address": { 00:20:34.762 "trtype": "TCP", 00:20:34.762 "adrfam": "IPv4", 00:20:34.762 "traddr": "10.0.0.2", 00:20:34.762 
"trsvcid": "4420" 00:20:34.762 }, 00:20:34.762 "peer_address": { 00:20:34.762 "trtype": "TCP", 00:20:34.762 "adrfam": "IPv4", 00:20:34.762 "traddr": "10.0.0.1", 00:20:34.762 "trsvcid": "51584" 00:20:34.762 }, 00:20:34.762 "auth": { 00:20:34.762 "state": "completed", 00:20:34.762 "digest": "sha384", 00:20:34.762 "dhgroup": "ffdhe2048" 00:20:34.762 } 00:20:34.762 } 00:20:34.762 ]' 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.762 00:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.021 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:35.021 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.592 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.852 00:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.852 00:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.114 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.114 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.114 { 00:20:36.114 "cntlid": 59, 00:20:36.114 "qid": 0, 00:20:36.114 "state": "enabled", 00:20:36.114 "thread": "nvmf_tgt_poll_group_000", 00:20:36.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:36.115 "listen_address": { 00:20:36.115 "trtype": "TCP", 00:20:36.115 "adrfam": "IPv4", 00:20:36.115 "traddr": "10.0.0.2", 00:20:36.115 "trsvcid": "4420" 00:20:36.115 }, 00:20:36.115 "peer_address": { 00:20:36.115 "trtype": "TCP", 00:20:36.115 "adrfam": "IPv4", 00:20:36.115 "traddr": "10.0.0.1", 00:20:36.115 "trsvcid": "51628" 00:20:36.115 }, 00:20:36.115 "auth": { 00:20:36.115 "state": "completed", 00:20:36.115 "digest": "sha384", 00:20:36.115 "dhgroup": "ffdhe2048" 00:20:36.115 } 00:20:36.115 } 00:20:36.115 ]' 00:20:36.115 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.376 00:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.376 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.634 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:36.634 00:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.207 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.465 00:20:37.465 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.465 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.465 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.723 00:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.723 { 00:20:37.723 "cntlid": 61, 00:20:37.723 "qid": 0, 00:20:37.723 "state": "enabled", 00:20:37.723 "thread": "nvmf_tgt_poll_group_000", 00:20:37.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.723 "listen_address": { 00:20:37.723 "trtype": "TCP", 00:20:37.723 "adrfam": "IPv4", 00:20:37.723 "traddr": "10.0.0.2", 00:20:37.723 "trsvcid": "4420" 00:20:37.723 }, 00:20:37.723 "peer_address": { 00:20:37.723 "trtype": "TCP", 00:20:37.723 "adrfam": "IPv4", 00:20:37.723 "traddr": "10.0.0.1", 00:20:37.723 "trsvcid": "51660" 00:20:37.723 }, 00:20:37.723 "auth": { 00:20:37.723 "state": "completed", 00:20:37.723 "digest": "sha384", 00:20:37.723 "dhgroup": "ffdhe2048" 00:20:37.723 } 00:20:37.723 } 00:20:37.723 ]' 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.723 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.982 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.982 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.982 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.982 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.982 00:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.983 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:37.983 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.550 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.807 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:38.808 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:38.808 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.066 00:20:39.066 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.066 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.066 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.324 { 00:20:39.324 "cntlid": 63, 00:20:39.324 "qid": 0, 00:20:39.324 "state": "enabled", 00:20:39.324 "thread": "nvmf_tgt_poll_group_000", 00:20:39.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.324 "listen_address": { 00:20:39.324 "trtype": "TCP", 00:20:39.324 "adrfam": 
"IPv4", 00:20:39.324 "traddr": "10.0.0.2", 00:20:39.324 "trsvcid": "4420" 00:20:39.324 }, 00:20:39.324 "peer_address": { 00:20:39.324 "trtype": "TCP", 00:20:39.324 "adrfam": "IPv4", 00:20:39.324 "traddr": "10.0.0.1", 00:20:39.324 "trsvcid": "52874" 00:20:39.324 }, 00:20:39.324 "auth": { 00:20:39.324 "state": "completed", 00:20:39.324 "digest": "sha384", 00:20:39.324 "dhgroup": "ffdhe2048" 00:20:39.324 } 00:20:39.324 } 00:20:39.324 ]' 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.324 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.583 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:39.583 00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.150 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.409 
00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.409 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.668 00:20:40.668 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.668 00:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.668 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.926 { 00:20:40.926 "cntlid": 65, 00:20:40.926 "qid": 0, 00:20:40.926 "state": "enabled", 00:20:40.926 "thread": "nvmf_tgt_poll_group_000", 00:20:40.926 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.926 "listen_address": { 00:20:40.926 "trtype": "TCP", 00:20:40.926 "adrfam": "IPv4", 00:20:40.926 "traddr": "10.0.0.2", 00:20:40.926 "trsvcid": "4420" 00:20:40.926 }, 00:20:40.926 "peer_address": { 00:20:40.926 "trtype": "TCP", 00:20:40.926 "adrfam": "IPv4", 00:20:40.926 "traddr": "10.0.0.1", 00:20:40.926 "trsvcid": "52910" 00:20:40.926 }, 00:20:40.926 "auth": { 00:20:40.926 "state": "completed", 00:20:40.926 "digest": "sha384", 00:20:40.926 "dhgroup": "ffdhe3072" 00:20:40.926 } 00:20:40.926 } 00:20:40.926 ]' 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.926 00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.185 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:41.185 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.753 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.011 00:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.269 00:20:42.269 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.269 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.269 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.527 00:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.527 { 00:20:42.527 "cntlid": 67, 00:20:42.527 "qid": 0, 00:20:42.527 "state": "enabled", 00:20:42.527 "thread": "nvmf_tgt_poll_group_000", 00:20:42.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.527 "listen_address": { 00:20:42.527 "trtype": "TCP", 00:20:42.527 "adrfam": "IPv4", 00:20:42.527 "traddr": "10.0.0.2", 00:20:42.527 "trsvcid": "4420" 00:20:42.527 }, 00:20:42.527 "peer_address": { 00:20:42.527 "trtype": "TCP", 00:20:42.527 "adrfam": "IPv4", 00:20:42.527 "traddr": "10.0.0.1", 00:20:42.527 "trsvcid": "52936" 00:20:42.527 }, 00:20:42.527 "auth": { 00:20:42.527 "state": "completed", 00:20:42.527 "digest": "sha384", 00:20:42.527 "dhgroup": "ffdhe3072" 00:20:42.527 } 00:20:42.527 } 00:20:42.527 ]' 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.527 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.786 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:42.786 00:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.352 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.611 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.870 00:20:43.870 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.870 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.870 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.129 { 00:20:44.129 "cntlid": 69, 00:20:44.129 "qid": 0, 00:20:44.129 "state": "enabled", 00:20:44.129 "thread": "nvmf_tgt_poll_group_000", 00:20:44.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.129 
"listen_address": { 00:20:44.129 "trtype": "TCP", 00:20:44.129 "adrfam": "IPv4", 00:20:44.129 "traddr": "10.0.0.2", 00:20:44.129 "trsvcid": "4420" 00:20:44.129 }, 00:20:44.129 "peer_address": { 00:20:44.129 "trtype": "TCP", 00:20:44.129 "adrfam": "IPv4", 00:20:44.129 "traddr": "10.0.0.1", 00:20:44.129 "trsvcid": "52964" 00:20:44.129 }, 00:20:44.129 "auth": { 00:20:44.129 "state": "completed", 00:20:44.129 "digest": "sha384", 00:20:44.129 "dhgroup": "ffdhe3072" 00:20:44.129 } 00:20:44.129 } 00:20:44.129 ]' 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.129 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.388 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:44.388 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.953 00:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.212 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.470 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.470 { 00:20:45.470 "cntlid": 71, 00:20:45.470 "qid": 0, 00:20:45.470 "state": "enabled", 00:20:45.470 "thread": "nvmf_tgt_poll_group_000", 00:20:45.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.470 "listen_address": { 00:20:45.470 "trtype": "TCP", 00:20:45.470 "adrfam": "IPv4", 00:20:45.470 "traddr": "10.0.0.2", 00:20:45.470 "trsvcid": "4420" 00:20:45.470 }, 00:20:45.470 "peer_address": { 00:20:45.470 "trtype": "TCP", 00:20:45.470 "adrfam": "IPv4", 00:20:45.470 "traddr": "10.0.0.1", 00:20:45.470 "trsvcid": "52978" 00:20:45.470 }, 00:20:45.470 "auth": { 00:20:45.470 "state": "completed", 00:20:45.470 "digest": "sha384", 00:20:45.470 "dhgroup": "ffdhe3072" 00:20:45.470 } 00:20:45.470 } 00:20:45.470 ]' 00:20:45.470 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.728 00:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.728 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.986 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:45.986 00:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:46.552 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.812 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.812 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.812 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.812 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.071 00:20:47.071 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.071 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.071 00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.071 00:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.071 { 00:20:47.071 "cntlid": 73, 00:20:47.071 "qid": 0, 00:20:47.071 "state": "enabled", 00:20:47.071 "thread": "nvmf_tgt_poll_group_000", 00:20:47.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.071 "listen_address": { 00:20:47.071 "trtype": "TCP", 00:20:47.071 "adrfam": "IPv4", 00:20:47.071 "traddr": "10.0.0.2", 00:20:47.071 "trsvcid": "4420" 00:20:47.071 }, 00:20:47.071 "peer_address": { 00:20:47.071 "trtype": "TCP", 00:20:47.071 "adrfam": "IPv4", 00:20:47.071 "traddr": "10.0.0.1", 00:20:47.071 "trsvcid": "53004" 00:20:47.071 }, 00:20:47.071 "auth": { 00:20:47.071 "state": "completed", 00:20:47.071 "digest": "sha384", 00:20:47.071 "dhgroup": "ffdhe4096" 00:20:47.071 } 00:20:47.071 } 00:20:47.071 ]' 00:20:47.071 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.330 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.330 00:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.589 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:47.589 00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.156 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.414 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.673 00:20:48.673 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.673 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.673 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.932 { 00:20:48.932 "cntlid": 75, 00:20:48.932 "qid": 0, 00:20:48.932 "state": "enabled", 00:20:48.932 "thread": "nvmf_tgt_poll_group_000", 00:20:48.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:48.932 
"listen_address": { 00:20:48.932 "trtype": "TCP", 00:20:48.932 "adrfam": "IPv4", 00:20:48.932 "traddr": "10.0.0.2", 00:20:48.932 "trsvcid": "4420" 00:20:48.932 }, 00:20:48.932 "peer_address": { 00:20:48.932 "trtype": "TCP", 00:20:48.932 "adrfam": "IPv4", 00:20:48.932 "traddr": "10.0.0.1", 00:20:48.932 "trsvcid": "46114" 00:20:48.932 }, 00:20:48.932 "auth": { 00:20:48.932 "state": "completed", 00:20:48.932 "digest": "sha384", 00:20:48.932 "dhgroup": "ffdhe4096" 00:20:48.932 } 00:20:48.932 } 00:20:48.932 ]' 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.932 00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.190 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:49.190 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.757 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.016 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.016 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.016 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.016 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.275 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.275 { 00:20:50.275 "cntlid": 77, 00:20:50.275 "qid": 0, 00:20:50.275 "state": "enabled", 00:20:50.275 "thread": "nvmf_tgt_poll_group_000", 00:20:50.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.275 "listen_address": { 00:20:50.275 "trtype": "TCP", 00:20:50.275 "adrfam": "IPv4", 00:20:50.275 "traddr": "10.0.0.2", 00:20:50.275 "trsvcid": "4420" 00:20:50.275 }, 00:20:50.275 "peer_address": { 00:20:50.275 "trtype": "TCP", 00:20:50.275 "adrfam": "IPv4", 00:20:50.275 "traddr": "10.0.0.1", 00:20:50.275 "trsvcid": "46122" 00:20:50.275 }, 00:20:50.275 "auth": { 00:20:50.275 "state": "completed", 00:20:50.275 "digest": "sha384", 00:20:50.275 "dhgroup": "ffdhe4096" 00:20:50.275 } 00:20:50.275 } 00:20:50.275 ]' 00:20:50.275 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.534 00:01:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.534 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.793 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:50.793 00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:51.360 00:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.360 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.619 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.876 00:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.876 { 00:20:51.876 "cntlid": 79, 00:20:51.876 "qid": 0, 00:20:51.876 "state": "enabled", 00:20:51.876 "thread": "nvmf_tgt_poll_group_000", 00:20:51.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.876 "listen_address": { 00:20:51.876 "trtype": "TCP", 00:20:51.876 "adrfam": "IPv4", 00:20:51.876 "traddr": "10.0.0.2", 00:20:51.876 "trsvcid": "4420" 00:20:51.876 }, 00:20:51.876 "peer_address": { 00:20:51.876 "trtype": "TCP", 00:20:51.876 "adrfam": "IPv4", 00:20:51.876 "traddr": "10.0.0.1", 00:20:51.876 "trsvcid": "46168" 00:20:51.876 }, 00:20:51.876 "auth": { 00:20:51.876 "state": "completed", 00:20:51.876 "digest": "sha384", 00:20:51.876 "dhgroup": "ffdhe4096" 00:20:51.876 } 00:20:51.876 } 00:20:51.876 ]' 00:20:51.876 00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.876 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.876 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.134 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.134 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.134 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.134 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.134 00:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.392 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:52.392 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:52.959 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.959 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.537 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.537 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.537 { 00:20:53.537 "cntlid": 81, 00:20:53.537 "qid": 0, 00:20:53.537 "state": "enabled", 00:20:53.537 "thread": "nvmf_tgt_poll_group_000", 00:20:53.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.537 "listen_address": { 
00:20:53.537 "trtype": "TCP", 00:20:53.537 "adrfam": "IPv4", 00:20:53.537 "traddr": "10.0.0.2", 00:20:53.537 "trsvcid": "4420" 00:20:53.537 }, 00:20:53.537 "peer_address": { 00:20:53.537 "trtype": "TCP", 00:20:53.537 "adrfam": "IPv4", 00:20:53.538 "traddr": "10.0.0.1", 00:20:53.538 "trsvcid": "46202" 00:20:53.538 }, 00:20:53.538 "auth": { 00:20:53.538 "state": "completed", 00:20:53.538 "digest": "sha384", 00:20:53.538 "dhgroup": "ffdhe6144" 00:20:53.538 } 00:20:53.538 } 00:20:53.538 ]' 00:20:53.538 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.538 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.538 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.797 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:53.797 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.797 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.797 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.797 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.056 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:54.056 00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:20:54.622 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.623 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.190 00:20:55.190 00:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.190 { 00:20:55.190 "cntlid": 83, 00:20:55.190 "qid": 0, 00:20:55.190 "state": "enabled", 00:20:55.190 "thread": "nvmf_tgt_poll_group_000", 00:20:55.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.190 "listen_address": { 00:20:55.190 "trtype": "TCP", 00:20:55.190 "adrfam": "IPv4", 00:20:55.190 "traddr": "10.0.0.2", 00:20:55.190 "trsvcid": "4420" 00:20:55.190 }, 00:20:55.190 "peer_address": { 00:20:55.190 "trtype": "TCP", 00:20:55.190 "adrfam": "IPv4", 00:20:55.190 "traddr": "10.0.0.1", 00:20:55.190 "trsvcid": "46220" 00:20:55.190 }, 00:20:55.190 "auth": { 00:20:55.190 "state": "completed", 00:20:55.190 "digest": "sha384", 00:20:55.190 "dhgroup": "ffdhe6144" 00:20:55.190 } 00:20:55.190 } 00:20:55.190 ]' 00:20:55.190 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.449 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.708 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:55.708 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.275 00:01:35 
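The `--dhchap-secret`/`--dhchap-ctrl-secret` strings passed to `nvme connect` above use the DH-HMAC-CHAP secret representation `DHHC-1:<t>:<base64>:`, where (per the NVMe in-band authentication scheme as exposed by nvme-cli) `<t>` encodes the hash used to transform the configured key: `00` unhashed, `01` SHA-256, `02` SHA-384, `03` SHA-512, and the base64 payload carries the key material. A minimal POSIX-shell parse of one secret taken verbatim from this log (the field meanings are my reading of the format, not stated in the log itself):

```shell
# One of the host secrets from the trace above (trailing ':' is part of the format).
secret='DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn:'

fmt=${secret%%:*}        # format tag, expected "DHHC-1"
rest=${secret#*:}
hashid=${rest%%:*}       # key-transform indicator, "00".."03"
payload=${rest#*:}
payload=${payload%:}     # base64 key material, trailing ':' stripped

case "$hashid" in
  00) hash=none ;;
  01) hash=SHA-256 ;;
  02) hash=SHA-384 ;;
  03) hash=SHA-512 ;;
  *)  hash=unknown ;;
esac

echo "$fmt $hash ${#payload} base64 chars"
```

Splitting on `:` is safe here because standard base64 output never contains a colon.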
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.275 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.840 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.840 { 00:20:56.840 "cntlid": 85, 00:20:56.840 "qid": 0, 00:20:56.840 "state": "enabled", 00:20:56.840 "thread": "nvmf_tgt_poll_group_000", 00:20:56.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.840 "listen_address": { 00:20:56.840 "trtype": "TCP", 00:20:56.840 "adrfam": "IPv4", 00:20:56.840 "traddr": "10.0.0.2", 00:20:56.840 "trsvcid": "4420" 00:20:56.840 }, 00:20:56.840 "peer_address": { 00:20:56.840 "trtype": "TCP", 00:20:56.840 "adrfam": "IPv4", 00:20:56.840 "traddr": "10.0.0.1", 00:20:56.840 "trsvcid": "46248" 00:20:56.840 }, 00:20:56.840 "auth": { 00:20:56.840 "state": "completed", 00:20:56.840 "digest": "sha384", 00:20:56.840 "dhgroup": "ffdhe6144" 00:20:56.840 } 00:20:56.840 } 00:20:56.840 ]' 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.840 00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.098 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.098 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.098 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:57.098 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.098 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.357 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:57.357 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.924 00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.924 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.491 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.491 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.491 { 00:20:58.491 "cntlid": 87, 00:20:58.491 "qid": 0, 00:20:58.491 "state": "enabled", 00:20:58.491 "thread": "nvmf_tgt_poll_group_000", 00:20:58.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.491 "listen_address": { 00:20:58.491 "trtype": 
"TCP", 00:20:58.491 "adrfam": "IPv4", 00:20:58.491 "traddr": "10.0.0.2", 00:20:58.491 "trsvcid": "4420" 00:20:58.492 }, 00:20:58.492 "peer_address": { 00:20:58.492 "trtype": "TCP", 00:20:58.492 "adrfam": "IPv4", 00:20:58.492 "traddr": "10.0.0.1", 00:20:58.492 "trsvcid": "59310" 00:20:58.492 }, 00:20:58.492 "auth": { 00:20:58.492 "state": "completed", 00:20:58.492 "digest": "sha384", 00:20:58.492 "dhgroup": "ffdhe6144" 00:20:58.492 } 00:20:58.492 } 00:20:58.492 ]' 00:20:58.492 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.751 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.078 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:59.078 00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.352 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.610 00:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.610 00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.178 00:21:00.178 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.178 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.178 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.436 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.437 { 00:21:00.437 "cntlid": 89, 00:21:00.437 "qid": 0, 00:21:00.437 "state": "enabled", 00:21:00.437 "thread": "nvmf_tgt_poll_group_000", 00:21:00.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.437 "listen_address": { 00:21:00.437 "trtype": "TCP", 00:21:00.437 "adrfam": "IPv4", 00:21:00.437 "traddr": "10.0.0.2", 00:21:00.437 "trsvcid": "4420" 00:21:00.437 }, 00:21:00.437 "peer_address": { 00:21:00.437 "trtype": "TCP", 00:21:00.437 "adrfam": "IPv4", 00:21:00.437 "traddr": "10.0.0.1", 00:21:00.437 "trsvcid": "59342" 00:21:00.437 }, 00:21:00.437 "auth": { 00:21:00.437 "state": "completed", 00:21:00.437 "digest": "sha384", 00:21:00.437 "dhgroup": "ffdhe8192" 00:21:00.437 } 00:21:00.437 } 00:21:00.437 ]' 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.437 00:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.437 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.696 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:00.696 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
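The cycle logged above (and repeated below for each key/digest/dhgroup combination) follows the `connect_authenticate` flow in target/auth.sh. A minimal dry-run sketch of that cycle is below; `rpc()` here only echoes the rpc.py command lines rather than executing them, so it runs without an SPDK target, and the paths, NQNs, and addresses are copied from this log. Note this sketch always passes a controller key, whereas the real script omits `--dhchap-ctrlr-key` when `ckeys[$keyid]` is unset (as seen later for key3).

```shell
# Dry-run sketch of one connect_authenticate cycle from this log.
# rpc() echoes the rpc.py invocation instead of executing it.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

rpc() { echo "$RPC_PY -s $HOST_SOCK $*"; }

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # restrict the host to one digest/dhgroup so negotiation is deterministic
    rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # allow the host on the subsystem with the key under test, then attach
    rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # the real script then checks .auth.digest/.dhgroup/.state from
    # nvmf_subsystem_get_qpairs before tearing down
    rpc nvmf_subsystem_get_qpairs "$SUBNQN"
    rpc bdev_nvme_detach_controller nvme0
    rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
}

connect_authenticate sha384 ffdhe8192 0
```

The verification step corresponds to the `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks visible in the log, which assert the qpair completed authentication with exactly the configured digest and DH group.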
00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.263 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.522 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.090 00:21:02.090 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.090 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.090 00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.090 { 00:21:02.090 "cntlid": 91, 00:21:02.090 "qid": 0, 00:21:02.090 "state": "enabled", 00:21:02.090 "thread": "nvmf_tgt_poll_group_000", 00:21:02.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.090 "listen_address": { 00:21:02.090 "trtype": "TCP", 00:21:02.090 "adrfam": "IPv4", 00:21:02.090 "traddr": "10.0.0.2", 00:21:02.090 "trsvcid": "4420" 00:21:02.090 }, 00:21:02.090 "peer_address": { 00:21:02.090 "trtype": "TCP", 00:21:02.090 "adrfam": "IPv4", 00:21:02.090 "traddr": "10.0.0.1", 00:21:02.090 "trsvcid": "59376" 00:21:02.090 }, 00:21:02.090 "auth": { 00:21:02.090 "state": "completed", 00:21:02.090 "digest": "sha384", 00:21:02.090 "dhgroup": "ffdhe8192" 00:21:02.090 } 00:21:02.090 } 00:21:02.090 ]' 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.090 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.350 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.350 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.350 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:02.350 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.350 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.612 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:02.612 00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.178 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.747 00:21:03.747 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.747 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.747 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.005 { 00:21:04.005 "cntlid": 93, 00:21:04.005 "qid": 0, 00:21:04.005 "state": "enabled", 00:21:04.005 "thread": "nvmf_tgt_poll_group_000", 00:21:04.005 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.005 "listen_address": { 00:21:04.005 "trtype": "TCP", 00:21:04.005 "adrfam": "IPv4", 00:21:04.005 "traddr": "10.0.0.2", 00:21:04.005 "trsvcid": "4420" 00:21:04.005 }, 00:21:04.005 "peer_address": { 00:21:04.005 "trtype": "TCP", 00:21:04.005 "adrfam": "IPv4", 00:21:04.005 "traddr": "10.0.0.1", 00:21:04.005 "trsvcid": "59402" 00:21:04.005 }, 00:21:04.005 "auth": { 00:21:04.005 "state": "completed", 00:21:04.005 "digest": "sha384", 00:21:04.005 "dhgroup": "ffdhe8192" 00:21:04.005 } 00:21:04.005 } 00:21:04.005 ]' 00:21:04.005 00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.005 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.263 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:04.263 00:01:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.830 00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.089 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:05.657 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.657 { 00:21:05.657 "cntlid": 95, 00:21:05.657 "qid": 0, 00:21:05.657 "state": "enabled", 00:21:05.657 "thread": "nvmf_tgt_poll_group_000", 00:21:05.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.657 "listen_address": { 00:21:05.657 "trtype": "TCP", 00:21:05.657 "adrfam": "IPv4", 00:21:05.657 "traddr": "10.0.0.2", 00:21:05.657 "trsvcid": "4420" 00:21:05.657 }, 00:21:05.657 "peer_address": { 00:21:05.657 "trtype": "TCP", 00:21:05.657 "adrfam": "IPv4", 00:21:05.657 "traddr": "10.0.0.1", 00:21:05.657 "trsvcid": "59442" 00:21:05.657 }, 00:21:05.657 "auth": { 00:21:05.657 "state": "completed", 00:21:05.657 "digest": "sha384", 00:21:05.657 "dhgroup": "ffdhe8192" 00:21:05.657 } 00:21:05.657 } 00:21:05.657 ]' 00:21:05.657 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.916 00:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.916 00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.174 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:06.174 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.741 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.000 00:21:07.000 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.000 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.000 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.258 00:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.258 { 00:21:07.258 "cntlid": 97, 00:21:07.258 "qid": 0, 00:21:07.258 "state": "enabled", 00:21:07.258 "thread": "nvmf_tgt_poll_group_000", 00:21:07.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.258 "listen_address": { 00:21:07.258 "trtype": "TCP", 00:21:07.258 "adrfam": "IPv4", 00:21:07.258 "traddr": "10.0.0.2", 00:21:07.258 "trsvcid": "4420" 00:21:07.258 }, 00:21:07.258 "peer_address": { 00:21:07.258 "trtype": "TCP", 00:21:07.258 "adrfam": "IPv4", 00:21:07.258 "traddr": "10.0.0.1", 00:21:07.258 "trsvcid": "59464" 00:21:07.258 }, 00:21:07.258 "auth": { 00:21:07.258 "state": "completed", 00:21:07.258 "digest": "sha512", 00:21:07.258 "dhgroup": "null" 00:21:07.258 } 00:21:07.258 } 00:21:07.258 ]' 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:07.258 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.516 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.516 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.516 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.516 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:07.516 00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.084 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.343 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.602 00:21:08.602 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.602 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.602 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.860 { 00:21:08.860 "cntlid": 99, 
00:21:08.860 "qid": 0, 00:21:08.860 "state": "enabled", 00:21:08.860 "thread": "nvmf_tgt_poll_group_000", 00:21:08.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.860 "listen_address": { 00:21:08.860 "trtype": "TCP", 00:21:08.860 "adrfam": "IPv4", 00:21:08.860 "traddr": "10.0.0.2", 00:21:08.860 "trsvcid": "4420" 00:21:08.860 }, 00:21:08.860 "peer_address": { 00:21:08.860 "trtype": "TCP", 00:21:08.860 "adrfam": "IPv4", 00:21:08.860 "traddr": "10.0.0.1", 00:21:08.860 "trsvcid": "41802" 00:21:08.860 }, 00:21:08.860 "auth": { 00:21:08.860 "state": "completed", 00:21:08.860 "digest": "sha512", 00:21:08.860 "dhgroup": "null" 00:21:08.860 } 00:21:08.860 } 00:21:08.860 ]' 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.860 00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.118 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret 
DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:09.119 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:09.686 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.687 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:09.945 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:09.945 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.945 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:09.945 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.946 00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.212 00:21:10.212 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.212 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.212 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.475 { 00:21:10.475 "cntlid": 101, 00:21:10.475 "qid": 0, 00:21:10.475 "state": "enabled", 00:21:10.475 "thread": "nvmf_tgt_poll_group_000", 00:21:10.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.475 "listen_address": { 00:21:10.475 "trtype": "TCP", 00:21:10.475 "adrfam": "IPv4", 00:21:10.475 "traddr": "10.0.0.2", 00:21:10.475 "trsvcid": "4420" 00:21:10.475 }, 00:21:10.475 "peer_address": { 00:21:10.475 "trtype": "TCP", 00:21:10.475 "adrfam": "IPv4", 00:21:10.475 "traddr": "10.0.0.1", 00:21:10.475 "trsvcid": "41812" 00:21:10.475 }, 00:21:10.475 "auth": { 00:21:10.475 "state": "completed", 00:21:10.475 "digest": "sha512", 00:21:10.475 "dhgroup": "null" 00:21:10.475 } 00:21:10.475 } 
00:21:10.475 ]' 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.475 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.476 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:10.476 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.476 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.476 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.476 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.735 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:10.735 00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.302 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.302 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.561 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.825 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.825 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.826 { 00:21:11.826 "cntlid": 103, 00:21:11.826 "qid": 0, 00:21:11.826 "state": "enabled", 00:21:11.826 "thread": "nvmf_tgt_poll_group_000", 00:21:11.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.826 "listen_address": { 00:21:11.826 "trtype": "TCP", 00:21:11.826 "adrfam": "IPv4", 00:21:11.826 "traddr": "10.0.0.2", 00:21:11.826 "trsvcid": "4420" 00:21:11.826 }, 00:21:11.826 "peer_address": { 00:21:11.826 "trtype": "TCP", 00:21:11.826 "adrfam": "IPv4", 00:21:11.826 "traddr": "10.0.0.1", 00:21:11.826 "trsvcid": "41836" 00:21:11.826 }, 00:21:11.826 "auth": { 00:21:11.826 "state": "completed", 00:21:11.826 "digest": "sha512", 00:21:11.826 "dhgroup": "null" 00:21:11.826 } 00:21:11.826 } 00:21:11.826 ]' 00:21:11.826 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.085 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.085 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.085 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.085 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.085 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.085 00:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.085 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.343 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:12.343 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.910 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.911 00:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:12.911 00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.170 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.170 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.429 { 00:21:13.429 "cntlid": 105, 00:21:13.429 "qid": 0, 00:21:13.429 "state": "enabled", 00:21:13.429 "thread": "nvmf_tgt_poll_group_000", 00:21:13.429 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.429 "listen_address": { 00:21:13.429 "trtype": "TCP", 00:21:13.429 "adrfam": "IPv4", 00:21:13.429 "traddr": "10.0.0.2", 00:21:13.429 "trsvcid": "4420" 00:21:13.429 }, 00:21:13.429 "peer_address": { 00:21:13.429 "trtype": "TCP", 00:21:13.429 "adrfam": "IPv4", 00:21:13.429 "traddr": "10.0.0.1", 00:21:13.429 "trsvcid": "41852" 00:21:13.429 }, 00:21:13.429 "auth": { 00:21:13.429 "state": "completed", 00:21:13.429 "digest": "sha512", 00:21:13.429 "dhgroup": "ffdhe2048" 00:21:13.429 } 00:21:13.429 } 00:21:13.429 ]' 00:21:13.429 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.688 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.947 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret 
DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:13.947 00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.513 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:14.772 00:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.772 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.772 00:21:15.031 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.031 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.031 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.031 { 00:21:15.031 "cntlid": 107, 00:21:15.031 "qid": 0, 00:21:15.031 "state": "enabled", 00:21:15.031 "thread": "nvmf_tgt_poll_group_000", 00:21:15.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.031 "listen_address": { 00:21:15.031 "trtype": "TCP", 00:21:15.031 "adrfam": "IPv4", 00:21:15.031 "traddr": "10.0.0.2", 00:21:15.031 "trsvcid": "4420" 00:21:15.031 }, 00:21:15.031 "peer_address": { 00:21:15.031 "trtype": "TCP", 00:21:15.031 "adrfam": "IPv4", 00:21:15.031 "traddr": "10.0.0.1", 00:21:15.031 "trsvcid": "41862" 00:21:15.031 }, 00:21:15.031 "auth": { 00:21:15.031 "state": 
"completed", 00:21:15.031 "digest": "sha512", 00:21:15.031 "dhgroup": "ffdhe2048" 00:21:15.031 } 00:21:15.031 } 00:21:15.031 ]' 00:21:15.031 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.290 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.548 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:15.548 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:16.131 00:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.131 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.395 00:21:16.395 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.395 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.395 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.653 
00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.653 { 00:21:16.653 "cntlid": 109, 00:21:16.653 "qid": 0, 00:21:16.653 "state": "enabled", 00:21:16.653 "thread": "nvmf_tgt_poll_group_000", 00:21:16.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.653 "listen_address": { 00:21:16.653 "trtype": "TCP", 00:21:16.653 "adrfam": "IPv4", 00:21:16.653 "traddr": "10.0.0.2", 00:21:16.653 "trsvcid": "4420" 00:21:16.653 }, 00:21:16.653 "peer_address": { 00:21:16.653 "trtype": "TCP", 00:21:16.653 "adrfam": "IPv4", 00:21:16.653 "traddr": "10.0.0.1", 00:21:16.653 "trsvcid": "41882" 00:21:16.653 }, 00:21:16.653 "auth": { 00:21:16.653 "state": "completed", 00:21:16.653 "digest": "sha512", 00:21:16.653 "dhgroup": "ffdhe2048" 00:21:16.653 } 00:21:16.653 } 00:21:16.653 ]' 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.653 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.911 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:16.911 00:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.911 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.911 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.911 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.911 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:16.911 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.478 
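The cycle that just completed above (target/auth.sh lines 73-78: fetch controllers, fetch qpairs, jq out the auth fields, compare, detach) repeats for every digest/dhgroup/key combination in this log. Out of log context, the verification step reduces to extracting the negotiated auth parameters from the `nvmf_subsystem_get_qpairs` JSON and string-comparing them against the expected values. A minimal standalone sketch, assuming `jq` is on PATH; the JSON here is a trimmed sample shaped like the qpairs output in the log, not live RPC output:

```shell
#!/usr/bin/env bash
# Sample qpairs JSON, trimmed from the shape seen in the log above
# (peer/listen addresses omitted; only the fields the checks read).
qpairs='[
  {
    "cntlid": 107,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe2048"
    }
  }
]'

# Extract the negotiated auth parameters, as target/auth.sh@75-77 do.
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state' <<< "$qpairs")

# Compare against the expected values for this iteration of the loop.
[ "$digest" = "sha512" ] || { echo "digest mismatch: $digest" >&2; exit 1; }
[ "$dhgroup" = "ffdhe2048" ] || { echo "dhgroup mismatch: $dhgroup" >&2; exit 1; }
[ "$state" = "completed" ] || { echo "auth not completed: $state" >&2; exit 1; }

echo "auth negotiated: $digest / $dhgroup ($state)"
```

In the real test the JSON comes from `rpc.py nvmf_subsystem_get_qpairs` against the running target, and a mismatch fails the iteration before the host is detached and the next digest/dhgroup pair is configured.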
00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.478 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.737 00:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.737 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.738 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.996 00:21:17.996 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:17.996 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:17.997 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.255 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.256 { 00:21:18.256 "cntlid": 111, 
00:21:18.256 "qid": 0, 00:21:18.256 "state": "enabled", 00:21:18.256 "thread": "nvmf_tgt_poll_group_000", 00:21:18.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.256 "listen_address": { 00:21:18.256 "trtype": "TCP", 00:21:18.256 "adrfam": "IPv4", 00:21:18.256 "traddr": "10.0.0.2", 00:21:18.256 "trsvcid": "4420" 00:21:18.256 }, 00:21:18.256 "peer_address": { 00:21:18.256 "trtype": "TCP", 00:21:18.256 "adrfam": "IPv4", 00:21:18.256 "traddr": "10.0.0.1", 00:21:18.256 "trsvcid": "60150" 00:21:18.256 }, 00:21:18.256 "auth": { 00:21:18.256 "state": "completed", 00:21:18.256 "digest": "sha512", 00:21:18.256 "dhgroup": "ffdhe2048" 00:21:18.256 } 00:21:18.256 } 00:21:18.256 ]' 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:18.256 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.519 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.519 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.519 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.519 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:18.519 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.088 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:19.355 00:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.355 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.618 00:21:19.618 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.618 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.618 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.876 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.876 { 00:21:19.876 "cntlid": 113, 00:21:19.876 "qid": 0, 00:21:19.876 "state": "enabled", 00:21:19.876 "thread": "nvmf_tgt_poll_group_000", 00:21:19.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.876 "listen_address": { 00:21:19.876 "trtype": "TCP", 00:21:19.876 "adrfam": "IPv4", 00:21:19.876 "traddr": "10.0.0.2", 00:21:19.876 "trsvcid": "4420" 00:21:19.876 }, 00:21:19.876 "peer_address": { 00:21:19.876 "trtype": "TCP", 00:21:19.876 "adrfam": "IPv4", 00:21:19.877 "traddr": "10.0.0.1", 00:21:19.877 "trsvcid": "60174" 00:21:19.877 }, 00:21:19.877 "auth": { 00:21:19.877 "state": 
"completed", 00:21:19.877 "digest": "sha512", 00:21:19.877 "dhgroup": "ffdhe3072" 00:21:19.877 } 00:21:19.877 } 00:21:19.877 ]' 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.877 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.134 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:20.134 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret 
DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.700 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.701 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.959 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.218 00:21:21.218 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.218 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.218 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.483 { 00:21:21.483 "cntlid": 115, 00:21:21.483 "qid": 0, 00:21:21.483 "state": "enabled", 00:21:21.483 "thread": "nvmf_tgt_poll_group_000", 00:21:21.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.483 "listen_address": { 00:21:21.483 "trtype": "TCP", 00:21:21.483 "adrfam": "IPv4", 00:21:21.483 "traddr": "10.0.0.2", 00:21:21.483 "trsvcid": "4420" 00:21:21.483 }, 00:21:21.483 "peer_address": { 00:21:21.483 "trtype": "TCP", 00:21:21.483 "adrfam": "IPv4", 00:21:21.483 "traddr": "10.0.0.1", 00:21:21.483 "trsvcid": "60200" 00:21:21.483 }, 00:21:21.483 "auth": { 00:21:21.483 "state": "completed", 00:21:21.483 "digest": "sha512", 00:21:21.483 "dhgroup": "ffdhe3072" 00:21:21.483 } 00:21:21.483 } 00:21:21.483 ]' 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.483 00:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.483 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.742 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:21.742 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.309 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.567 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.568 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.831 00:21:22.831 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.831 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.831 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.093 00:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.093 { 00:21:23.093 "cntlid": 117, 00:21:23.093 "qid": 0, 00:21:23.093 "state": "enabled", 00:21:23.093 "thread": "nvmf_tgt_poll_group_000", 00:21:23.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.093 "listen_address": { 00:21:23.093 "trtype": "TCP", 00:21:23.093 "adrfam": "IPv4", 00:21:23.093 "traddr": "10.0.0.2", 00:21:23.093 "trsvcid": "4420" 00:21:23.093 }, 00:21:23.093 "peer_address": { 00:21:23.093 "trtype": "TCP", 00:21:23.093 "adrfam": "IPv4", 00:21:23.093 "traddr": "10.0.0.1", 00:21:23.093 "trsvcid": "60218" 00:21:23.093 }, 00:21:23.093 "auth": { 00:21:23.093 "state": "completed", 00:21:23.093 "digest": "sha512", 00:21:23.093 "dhgroup": "ffdhe3072" 00:21:23.093 } 00:21:23.093 } 00:21:23.093 ]' 00:21:23.093 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.093 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.352 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:23.352 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:23.919 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.919 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.919 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.920 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.920 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.920 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.920 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:23.920 00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.178 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.437 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.437 { 00:21:24.437 "cntlid": 119, 00:21:24.437 "qid": 0, 00:21:24.437 "state": "enabled", 00:21:24.437 "thread": "nvmf_tgt_poll_group_000", 00:21:24.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.437 "listen_address": { 00:21:24.437 "trtype": "TCP", 00:21:24.437 "adrfam": "IPv4", 00:21:24.437 "traddr": "10.0.0.2", 00:21:24.437 "trsvcid": "4420" 00:21:24.437 }, 00:21:24.437 "peer_address": { 00:21:24.437 "trtype": "TCP", 00:21:24.437 "adrfam": "IPv4", 00:21:24.437 "traddr": "10.0.0.1", 
00:21:24.437 "trsvcid": "60244" 00:21:24.437 }, 00:21:24.437 "auth": { 00:21:24.437 "state": "completed", 00:21:24.437 "digest": "sha512", 00:21:24.437 "dhgroup": "ffdhe3072" 00:21:24.437 } 00:21:24.437 } 00:21:24.437 ]' 00:21:24.437 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.696 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.954 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:24.954 00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:25.521 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.780 00:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.780 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.040 00:21:26.040 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.040 00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.040 00:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.040 { 00:21:26.040 "cntlid": 121, 00:21:26.040 "qid": 0, 00:21:26.040 "state": "enabled", 00:21:26.040 "thread": "nvmf_tgt_poll_group_000", 00:21:26.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.040 "listen_address": { 00:21:26.040 "trtype": "TCP", 00:21:26.040 "adrfam": "IPv4", 00:21:26.040 "traddr": "10.0.0.2", 00:21:26.040 "trsvcid": "4420" 00:21:26.040 }, 00:21:26.040 "peer_address": { 00:21:26.040 "trtype": "TCP", 00:21:26.040 "adrfam": "IPv4", 00:21:26.040 "traddr": "10.0.0.1", 00:21:26.040 "trsvcid": "60260" 00:21:26.040 }, 00:21:26.040 "auth": { 00:21:26.040 "state": "completed", 00:21:26.040 "digest": "sha512", 00:21:26.040 "dhgroup": "ffdhe4096" 00:21:26.040 } 00:21:26.040 } 00:21:26.040 ]' 00:21:26.040 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.303 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.574 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:26.574 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.147 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.406 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.666 
00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.666 { 00:21:27.666 "cntlid": 123, 00:21:27.666 "qid": 0, 00:21:27.666 "state": "enabled", 00:21:27.666 "thread": "nvmf_tgt_poll_group_000", 00:21:27.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.666 "listen_address": { 00:21:27.666 "trtype": "TCP", 00:21:27.666 "adrfam": "IPv4", 00:21:27.666 "traddr": "10.0.0.2", 00:21:27.666 "trsvcid": "4420" 00:21:27.666 }, 00:21:27.666 "peer_address": { 00:21:27.666 "trtype": "TCP", 00:21:27.666 "adrfam": "IPv4", 00:21:27.666 "traddr": "10.0.0.1", 00:21:27.666 "trsvcid": "60276" 00:21:27.666 }, 00:21:27.666 "auth": { 00:21:27.666 "state": "completed", 00:21:27.666 "digest": "sha512", 00:21:27.666 "dhgroup": "ffdhe4096" 00:21:27.666 } 00:21:27.666 } 00:21:27.666 ]' 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.666 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.924 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.924 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.924 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.924 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.924 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.207 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:28.207 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.774 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.774 00:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.035 00:21:29.035 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.036 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.036 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.297 { 00:21:29.297 "cntlid": 125, 00:21:29.297 "qid": 0, 00:21:29.297 "state": "enabled", 00:21:29.297 "thread": "nvmf_tgt_poll_group_000", 00:21:29.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.297 "listen_address": { 00:21:29.297 "trtype": "TCP", 00:21:29.297 "adrfam": "IPv4", 00:21:29.297 "traddr": "10.0.0.2", 00:21:29.297 "trsvcid": "4420" 00:21:29.297 }, 00:21:29.297 "peer_address": { 
00:21:29.297 "trtype": "TCP", 00:21:29.297 "adrfam": "IPv4", 00:21:29.297 "traddr": "10.0.0.1", 00:21:29.297 "trsvcid": "39256" 00:21:29.297 }, 00:21:29.297 "auth": { 00:21:29.297 "state": "completed", 00:21:29.297 "digest": "sha512", 00:21:29.297 "dhgroup": "ffdhe4096" 00:21:29.297 } 00:21:29.297 } 00:21:29.297 ]' 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.297 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.556 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.556 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.556 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.556 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:29.556 00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.132 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:30.391 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:30.392 00:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.392 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.649 00:21:30.649 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.649 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.649 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.907 { 00:21:30.907 "cntlid": 127, 00:21:30.907 "qid": 0, 00:21:30.907 "state": "enabled", 00:21:30.907 "thread": "nvmf_tgt_poll_group_000", 00:21:30.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.907 "listen_address": { 00:21:30.907 "trtype": "TCP", 00:21:30.907 "adrfam": "IPv4", 00:21:30.907 "traddr": "10.0.0.2", 00:21:30.907 "trsvcid": "4420" 00:21:30.907 }, 00:21:30.907 "peer_address": { 00:21:30.907 "trtype": "TCP", 00:21:30.907 "adrfam": "IPv4", 00:21:30.907 "traddr": "10.0.0.1", 00:21:30.907 "trsvcid": "39276" 00:21:30.907 }, 00:21:30.907 "auth": { 00:21:30.907 "state": "completed", 00:21:30.907 "digest": "sha512", 00:21:30.907 "dhgroup": "ffdhe4096" 00:21:30.907 } 00:21:30.907 } 00:21:30.907 ]' 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.907 00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.907 00:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:30.907 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.165 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.165 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.165 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.165 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:31.165 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.730 00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.989 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.247 00:21:32.504 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.504 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.504 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.505 00:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.505 { 00:21:32.505 "cntlid": 129, 00:21:32.505 "qid": 0, 00:21:32.505 "state": "enabled", 00:21:32.505 "thread": "nvmf_tgt_poll_group_000", 00:21:32.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:32.505 "listen_address": { 00:21:32.505 "trtype": "TCP", 00:21:32.505 "adrfam": "IPv4", 00:21:32.505 "traddr": "10.0.0.2", 00:21:32.505 "trsvcid": "4420" 00:21:32.505 }, 00:21:32.505 "peer_address": { 00:21:32.505 "trtype": "TCP", 00:21:32.505 "adrfam": "IPv4", 00:21:32.505 "traddr": "10.0.0.1", 00:21:32.505 "trsvcid": "39290" 00:21:32.505 }, 00:21:32.505 "auth": { 00:21:32.505 "state": "completed", 00:21:32.505 "digest": "sha512", 00:21:32.505 "dhgroup": "ffdhe6144" 00:21:32.505 } 00:21:32.505 } 00:21:32.505 ]' 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.505 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.762 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.025 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:33.025 00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.593 00:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.593 00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.159 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.159 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.159 { 00:21:34.159 "cntlid": 131, 00:21:34.159 "qid": 0, 00:21:34.159 "state": "enabled", 00:21:34.159 "thread": "nvmf_tgt_poll_group_000", 00:21:34.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.159 "listen_address": { 00:21:34.159 "trtype": "TCP", 00:21:34.159 "adrfam": "IPv4", 00:21:34.159 "traddr": "10.0.0.2", 00:21:34.159 
"trsvcid": "4420" 00:21:34.159 }, 00:21:34.159 "peer_address": { 00:21:34.159 "trtype": "TCP", 00:21:34.159 "adrfam": "IPv4", 00:21:34.159 "traddr": "10.0.0.1", 00:21:34.159 "trsvcid": "39310" 00:21:34.160 }, 00:21:34.160 "auth": { 00:21:34.160 "state": "completed", 00:21:34.160 "digest": "sha512", 00:21:34.160 "dhgroup": "ffdhe6144" 00:21:34.160 } 00:21:34.160 } 00:21:34.160 ]' 00:21:34.160 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.160 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.160 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.418 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:34.676 00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.242 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.816 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.816 { 00:21:35.816 "cntlid": 133, 00:21:35.816 "qid": 0, 00:21:35.816 "state": "enabled", 00:21:35.816 "thread": "nvmf_tgt_poll_group_000", 00:21:35.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.816 "listen_address": { 00:21:35.816 "trtype": "TCP", 00:21:35.816 "adrfam": "IPv4", 00:21:35.816 "traddr": "10.0.0.2", 00:21:35.816 "trsvcid": "4420" 00:21:35.816 }, 00:21:35.816 "peer_address": { 00:21:35.816 "trtype": "TCP", 00:21:35.816 "adrfam": "IPv4", 00:21:35.816 "traddr": "10.0.0.1", 00:21:35.816 "trsvcid": "39336" 00:21:35.816 }, 00:21:35.816 "auth": { 00:21:35.816 "state": "completed", 00:21:35.816 "digest": "sha512", 00:21:35.816 "dhgroup": "ffdhe6144" 00:21:35.816 } 00:21:35.816 } 00:21:35.816 ]' 00:21:35.816 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.076 00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.076 00:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.076 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.076 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.076 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.076 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.076 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.334 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:36.334 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.900 00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.900 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.469 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.469 { 00:21:37.469 "cntlid": 135, 00:21:37.469 "qid": 0, 00:21:37.469 "state": "enabled", 00:21:37.469 "thread": "nvmf_tgt_poll_group_000", 00:21:37.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.469 "listen_address": { 00:21:37.469 "trtype": "TCP", 00:21:37.469 "adrfam": "IPv4", 00:21:37.469 "traddr": "10.0.0.2", 00:21:37.469 "trsvcid": "4420" 00:21:37.469 }, 00:21:37.469 "peer_address": { 00:21:37.469 "trtype": "TCP", 00:21:37.469 "adrfam": "IPv4", 00:21:37.469 "traddr": "10.0.0.1", 00:21:37.469 "trsvcid": "39364" 00:21:37.469 }, 00:21:37.469 "auth": { 00:21:37.469 "state": "completed", 00:21:37.469 "digest": "sha512", 00:21:37.469 "dhgroup": "ffdhe6144" 00:21:37.469 } 00:21:37.469 } 00:21:37.469 ]' 00:21:37.469 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.727 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.985 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:37.985 00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.552 00:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.552 00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.117 00:21:39.117 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.117 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.117 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.374 { 00:21:39.374 "cntlid": 137, 00:21:39.374 "qid": 0, 00:21:39.374 "state": "enabled", 00:21:39.374 "thread": "nvmf_tgt_poll_group_000", 00:21:39.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.374 "listen_address": { 00:21:39.374 "trtype": "TCP", 00:21:39.374 "adrfam": "IPv4", 00:21:39.374 "traddr": "10.0.0.2", 00:21:39.374 
"trsvcid": "4420" 00:21:39.374 }, 00:21:39.374 "peer_address": { 00:21:39.374 "trtype": "TCP", 00:21:39.374 "adrfam": "IPv4", 00:21:39.374 "traddr": "10.0.0.1", 00:21:39.374 "trsvcid": "51476" 00:21:39.374 }, 00:21:39.374 "auth": { 00:21:39.374 "state": "completed", 00:21:39.374 "digest": "sha512", 00:21:39.374 "dhgroup": "ffdhe8192" 00:21:39.374 } 00:21:39.374 } 00:21:39.374 ]' 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.374 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.657 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:39.657 00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.329 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:40.616 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:40.616 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.617 00:02:19 
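After each target-side configuration, the log validates the keys with the kernel initiator: `nvme connect` with a host secret (and, for mutual auth, a controller secret), followed by `nvme disconnect`. A minimal sketch of that check, assuming nvme-cli with DH-HMAC-CHAP support and a reachable target; the NQN, address, and port are the log's own, while `$HOSTNQN`, `$HOSTID`, and the secret values are placeholders, not the log's real keys:

```shell
# Hedged sketch of the per-iteration kernel-initiator check seen in this log.
# Requires nvme-cli with DH-CHAP support and a live NVMe/TCP target; the
# secrets below are placeholders for DHHC-1-format keys.
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:<host-uuid>"   # placeholder
HOSTID="<host-uuid>"                                     # placeholder

# Connect, authenticating the host (--dhchap-secret) and, mutually,
# the controller (--dhchap-ctrl-secret).
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "DHHC-1:00:<host key>" \
    --dhchap-ctrl-secret "DHHC-1:03:<controller key>"

# Tear down so the next digest/dhgroup combination starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```

A successful run prints a disconnect count of one controller, matching the "disconnected 1 controller(s)" lines in the log; an authentication failure would surface as a connect error instead.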
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.617 00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.188 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.188 { 00:21:41.188 "cntlid": 139, 00:21:41.188 "qid": 0, 00:21:41.188 "state": "enabled", 00:21:41.188 "thread": "nvmf_tgt_poll_group_000", 00:21:41.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.188 "listen_address": { 00:21:41.188 "trtype": "TCP", 00:21:41.188 "adrfam": "IPv4", 00:21:41.188 "traddr": "10.0.0.2", 00:21:41.188 "trsvcid": "4420" 00:21:41.188 }, 00:21:41.188 "peer_address": { 00:21:41.188 "trtype": "TCP", 00:21:41.188 "adrfam": "IPv4", 00:21:41.188 "traddr": "10.0.0.1", 00:21:41.188 "trsvcid": "51512" 00:21:41.188 }, 00:21:41.188 "auth": { 00:21:41.188 "state": "completed", 00:21:41.188 "digest": "sha512", 00:21:41.188 "dhgroup": "ffdhe8192" 00:21:41.188 } 00:21:41.188 } 00:21:41.188 ]' 00:21:41.188 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.446 00:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.446 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.704 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:41.704 00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: --dhchap-ctrl-secret DHHC-1:02:ZTVkZmMyNDFlZTQ0NWY4ZDI5Y2Y5ZTQzMGY5YTNhZGY2MTYzZGM3MDhjZTIzZTg3m25QsA==: 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.273 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.846 00:21:42.846 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.846 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.846 00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.106 00:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.106 { 00:21:43.106 "cntlid": 141, 00:21:43.106 "qid": 0, 00:21:43.106 "state": "enabled", 00:21:43.106 "thread": "nvmf_tgt_poll_group_000", 00:21:43.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:43.106 "listen_address": { 00:21:43.106 "trtype": "TCP", 00:21:43.106 "adrfam": "IPv4", 00:21:43.106 "traddr": "10.0.0.2", 00:21:43.106 "trsvcid": "4420" 00:21:43.106 }, 00:21:43.106 "peer_address": { 00:21:43.106 "trtype": "TCP", 00:21:43.106 "adrfam": "IPv4", 00:21:43.106 "traddr": "10.0.0.1", 00:21:43.106 "trsvcid": "51554" 00:21:43.106 }, 00:21:43.106 "auth": { 00:21:43.106 "state": "completed", 00:21:43.106 "digest": "sha512", 00:21:43.106 "dhgroup": "ffdhe8192" 00:21:43.106 } 00:21:43.106 } 00:21:43.106 ]' 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.106 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.364 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:43.365 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:01:NzZmZDNhMjYzODljMGIzYjViY2RmN2ZkNDVjNGFhYmRKwaUG: 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.932 00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.191 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:44.757 00:21:44.757 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.757 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.757 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.016 { 00:21:45.016 "cntlid": 143, 00:21:45.016 "qid": 0, 00:21:45.016 "state": "enabled", 00:21:45.016 "thread": "nvmf_tgt_poll_group_000", 00:21:45.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.016 "listen_address": { 00:21:45.016 "trtype": "TCP", 00:21:45.016 "adrfam": 
"IPv4", 00:21:45.016 "traddr": "10.0.0.2", 00:21:45.016 "trsvcid": "4420" 00:21:45.016 }, 00:21:45.016 "peer_address": { 00:21:45.016 "trtype": "TCP", 00:21:45.016 "adrfam": "IPv4", 00:21:45.016 "traddr": "10.0.0.1", 00:21:45.016 "trsvcid": "51598" 00:21:45.016 }, 00:21:45.016 "auth": { 00:21:45.016 "state": "completed", 00:21:45.016 "digest": "sha512", 00:21:45.016 "dhgroup": "ffdhe8192" 00:21:45.016 } 00:21:45.016 } 00:21:45.016 ]' 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.016 00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.016 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.016 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.016 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.275 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:45.275 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:45.843 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:46.102 00:02:24 
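At this point the test widens the host-side options to the full digest and dhgroup matrix (`sha256,sha384,sha512` × `null,ffdhe2048…ffdhe8192`) before repeating the connect/authenticate cycle. The per-iteration RPC flow the log keeps replaying can be sketched as follows; note the log actually drives two RPC sockets (the target via the default socket, the host-side bdev layer via `/var/tmp/host.sock`), and the NQNs and key names below are the log's own while the socket variables are just labels for that split:

```shell
# Hedged sketch of one DH-HMAC-CHAP test iteration from this log.
# Requires a running SPDK nvmf target and host app; not runnable standalone.
TGT_RPC="scripts/rpc.py"                          # target-side RPC (default socket)
HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side RPC

# 1. Pin the host initiator to one digest/dhgroup pair for this iteration.
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# 2. Register the host on the subsystem with its key (and controller key for
#    mutual authentication).
$TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host side, forcing the handshake.
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Verify the qpair completed auth with the expected digest/dhgroup.
$TGT_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

# 5. Tear down before the next key/dhgroup combination.
$HOST_RPC bdev_nvme_detach_controller nvme0
$TGT_RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
```

Step 4 corresponds to the `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks in the log, which assert `sha512`, the current dhgroup, and `completed` respectively.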
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.102 00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.102 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.102 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.102 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.102 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.361 00:21:46.361 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.361 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.361 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.620 { 00:21:46.620 "cntlid": 145, 00:21:46.620 "qid": 0, 00:21:46.620 "state": "enabled", 00:21:46.620 "thread": "nvmf_tgt_poll_group_000", 00:21:46.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.620 "listen_address": { 00:21:46.620 "trtype": "TCP", 00:21:46.620 "adrfam": "IPv4", 00:21:46.620 "traddr": "10.0.0.2", 00:21:46.620 "trsvcid": "4420" 00:21:46.620 }, 00:21:46.620 "peer_address": { 00:21:46.620 "trtype": "TCP", 00:21:46.620 "adrfam": "IPv4", 00:21:46.620 "traddr": "10.0.0.1", 00:21:46.620 "trsvcid": "51624" 00:21:46.620 }, 00:21:46.620 "auth": { 00:21:46.620 "state": 
"completed", 00:21:46.620 "digest": "sha512", 00:21:46.620 "dhgroup": "ffdhe8192" 00:21:46.620 } 00:21:46.620 } 00:21:46.620 ]' 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.620 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.879 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:46.879 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.879 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.879 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.879 00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.138 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:47.138 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjlhYjQxNTI1YWM5ZGNjNTZlMzNhMTM3MjExYWZlYThmMDZkODI2M2NmOTljMjc1svf59w==: --dhchap-ctrl-secret 
DHHC-1:03:MzI5MGI4Mjg1NjYyZTgwNWRjNjE3ZWExZTYzYzRiZGY3ZjZlMTliMDJlNWVlMzM5ZWRlZmFjNGZlYzg1ODA0Y+nxpHg=: 00:21:47.705 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.705 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.705 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.705 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.705 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:47.706 00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:47.965 request: 00:21:47.965 { 00:21:47.965 "name": "nvme0", 00:21:47.965 "trtype": "tcp", 00:21:47.965 "traddr": "10.0.0.2", 00:21:47.965 "adrfam": "ipv4", 00:21:47.965 "trsvcid": "4420", 00:21:47.965 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.965 "prchk_reftag": false, 00:21:47.965 "prchk_guard": false, 00:21:47.965 "hdgst": false, 00:21:47.965 "ddgst": false, 00:21:47.965 "dhchap_key": "key2", 00:21:47.965 "allow_unrecognized_csi": false, 00:21:47.965 "method": "bdev_nvme_attach_controller", 00:21:47.965 "req_id": 1 00:21:47.965 } 00:21:47.965 Got JSON-RPC error response 00:21:47.965 response: 00:21:47.965 { 00:21:47.965 "code": -5, 00:21:47.965 "message": 
"Input/output error" 00:21:47.965 } 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:47.965 00:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:47.965 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:48.533 request: 00:21:48.533 { 00:21:48.533 "name": "nvme0", 00:21:48.533 "trtype": "tcp", 00:21:48.533 "traddr": "10.0.0.2", 00:21:48.533 "adrfam": "ipv4", 00:21:48.533 "trsvcid": "4420", 00:21:48.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:48.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.533 "prchk_reftag": false, 00:21:48.533 "prchk_guard": false, 00:21:48.533 "hdgst": 
false, 00:21:48.533 "ddgst": false, 00:21:48.533 "dhchap_key": "key1", 00:21:48.533 "dhchap_ctrlr_key": "ckey2", 00:21:48.533 "allow_unrecognized_csi": false, 00:21:48.533 "method": "bdev_nvme_attach_controller", 00:21:48.533 "req_id": 1 00:21:48.533 } 00:21:48.533 Got JSON-RPC error response 00:21:48.533 response: 00:21:48.533 { 00:21:48.533 "code": -5, 00:21:48.533 "message": "Input/output error" 00:21:48.533 } 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.533 00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.101 request: 00:21:49.101 { 00:21:49.101 "name": "nvme0", 00:21:49.101 "trtype": 
"tcp", 00:21:49.101 "traddr": "10.0.0.2", 00:21:49.101 "adrfam": "ipv4", 00:21:49.101 "trsvcid": "4420", 00:21:49.101 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:49.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.101 "prchk_reftag": false, 00:21:49.101 "prchk_guard": false, 00:21:49.101 "hdgst": false, 00:21:49.101 "ddgst": false, 00:21:49.101 "dhchap_key": "key1", 00:21:49.101 "dhchap_ctrlr_key": "ckey1", 00:21:49.101 "allow_unrecognized_csi": false, 00:21:49.101 "method": "bdev_nvme_attach_controller", 00:21:49.101 "req_id": 1 00:21:49.101 } 00:21:49.101 Got JSON-RPC error response 00:21:49.101 response: 00:21:49.101 { 00:21:49.101 "code": -5, 00:21:49.101 "message": "Input/output error" 00:21:49.101 } 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 4006331 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 4006331 ']' 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4006331 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4006331 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4006331' 00:21:49.101 killing process with pid 4006331 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4006331 00:21:49.101 00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4006331 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=4027583 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 4027583 00:21:50.476 00:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4027583 ']' 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.476 00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 4027583 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 4027583 ']' 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.044 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.302 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.303 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:51.303 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:51.303 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.303 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.562 null0 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5iu 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dqN ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqN 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Iun 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.d8H ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.d8H 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gRk 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.BxF ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BxF 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kmj 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.821 00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.392 nvme0n1 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.651 { 00:21:52.651 "cntlid": 1, 00:21:52.651 "qid": 0, 00:21:52.651 "state": "enabled", 00:21:52.651 "thread": "nvmf_tgt_poll_group_000", 00:21:52.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.651 "listen_address": { 00:21:52.651 "trtype": "TCP", 00:21:52.651 "adrfam": "IPv4", 00:21:52.651 "traddr": "10.0.0.2", 00:21:52.651 "trsvcid": "4420" 00:21:52.651 }, 00:21:52.651 "peer_address": { 00:21:52.651 "trtype": "TCP", 00:21:52.651 "adrfam": "IPv4", 00:21:52.651 "traddr": 
"10.0.0.1", 00:21:52.651 "trsvcid": "43564" 00:21:52.651 }, 00:21:52.651 "auth": { 00:21:52.651 "state": "completed", 00:21:52.651 "digest": "sha512", 00:21:52.651 "dhgroup": "ffdhe8192" 00:21:52.651 } 00:21:52.651 } 00:21:52.651 ]' 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.651 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.910 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.910 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.910 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.910 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.910 00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.910 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:52.911 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:53.477 00:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.477 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.477 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.736 00:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.736 00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.995 request: 00:21:53.995 { 00:21:53.995 "name": "nvme0", 00:21:53.995 "trtype": "tcp", 00:21:53.995 "traddr": "10.0.0.2", 00:21:53.995 "adrfam": "ipv4", 00:21:53.995 "trsvcid": "4420", 00:21:53.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:53.995 "prchk_reftag": false, 00:21:53.995 "prchk_guard": false, 00:21:53.995 "hdgst": false, 00:21:53.995 "ddgst": false, 00:21:53.995 "dhchap_key": "key3", 00:21:53.995 
"allow_unrecognized_csi": false, 00:21:53.995 "method": "bdev_nvme_attach_controller", 00:21:53.995 "req_id": 1 00:21:53.995 } 00:21:53.995 Got JSON-RPC error response 00:21:53.995 response: 00:21:53.995 { 00:21:53.995 "code": -5, 00:21:53.995 "message": "Input/output error" 00:21:53.995 } 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:53.995 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:54.254 00:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.254 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.512 request: 00:21:54.512 { 00:21:54.512 "name": "nvme0", 00:21:54.512 "trtype": "tcp", 00:21:54.512 "traddr": "10.0.0.2", 00:21:54.512 "adrfam": "ipv4", 00:21:54.512 "trsvcid": "4420", 00:21:54.512 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.512 "prchk_reftag": false, 00:21:54.512 "prchk_guard": false, 00:21:54.512 "hdgst": false, 00:21:54.512 "ddgst": false, 00:21:54.512 "dhchap_key": "key3", 00:21:54.512 "allow_unrecognized_csi": false, 00:21:54.512 "method": "bdev_nvme_attach_controller", 00:21:54.512 "req_id": 1 00:21:54.512 } 00:21:54.512 Got JSON-RPC error response 00:21:54.512 response: 00:21:54.512 { 00:21:54.512 "code": -5, 00:21:54.512 "message": "Input/output error" 00:21:54.512 } 00:21:54.512 
00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.512 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:54.771 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:55.030 request: 00:21:55.030 { 00:21:55.030 "name": "nvme0", 00:21:55.030 "trtype": "tcp", 00:21:55.030 "traddr": "10.0.0.2", 00:21:55.030 "adrfam": "ipv4", 00:21:55.030 "trsvcid": "4420", 00:21:55.030 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:55.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.030 "prchk_reftag": false, 00:21:55.030 "prchk_guard": false, 00:21:55.030 "hdgst": false, 00:21:55.030 "ddgst": false, 00:21:55.030 "dhchap_key": "key0", 00:21:55.030 "dhchap_ctrlr_key": "key1", 00:21:55.030 "allow_unrecognized_csi": false, 00:21:55.030 "method": "bdev_nvme_attach_controller", 00:21:55.030 "req_id": 1 00:21:55.030 } 00:21:55.030 Got JSON-RPC error response 00:21:55.030 response: 00:21:55.030 { 00:21:55.030 "code": -5, 00:21:55.030 "message": "Input/output error" 00:21:55.030 } 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:55.030 00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:55.288 nvme0n1 00:21:55.288 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:55.288 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:55.288 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:55.547 00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:56.491 nvme0n1 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.491 
00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.491 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:21:56.749 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.749 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:56.749 00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: --dhchap-ctrl-secret DHHC-1:03:N2MxMmMwYjA1MWJmNTNlNmM1NzNkMzQ0OWEwOWQ0NTQwYmFiNzRjNzJiYmZlY2VmNDlkMDdlZjUzNzA4N2FiNRcYr4o=: 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.317 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:57.575 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:57.834 request: 00:21:57.834 { 00:21:57.834 "name": "nvme0", 00:21:57.834 "trtype": "tcp", 00:21:57.834 "traddr": "10.0.0.2", 00:21:57.834 "adrfam": "ipv4", 00:21:57.834 "trsvcid": "4420", 00:21:57.834 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.834 "prchk_reftag": false, 00:21:57.834 "prchk_guard": false, 00:21:57.834 "hdgst": false, 00:21:57.834 "ddgst": false, 00:21:57.834 "dhchap_key": "key1", 00:21:57.834 "allow_unrecognized_csi": false, 00:21:57.834 "method": "bdev_nvme_attach_controller", 00:21:57.834 "req_id": 1 00:21:57.834 } 00:21:57.834 Got JSON-RPC error response 00:21:57.834 response: 00:21:57.834 { 00:21:57.834 "code": -5, 00:21:57.834 "message": "Input/output error" 00:21:57.834 } 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:57.834 00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:58.778 nvme0n1 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.778 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:59.036 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:59.295 nvme0n1 00:21:59.295 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:59.295 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:59.295 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.553 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.553 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.553 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: '' 2s 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: ]] 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjA1MDExM2VkN2JiZGY0ZDJiN2Q4MjY4Y2EyNTdiNmNaF3cn: 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:59.812 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:01.714 
00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: 2s 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:01.714 00:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: ]] 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2U0ZGQ1ZjUxMzYwM2E3N2RiNTBkNjZiZDdlMjEyYTQwNjM4NmJiMmEyOWVhZDdlkUs/MQ==: 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:01.714 00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:04.246 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:04.247 00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:04.505 nvme0n1 00:22:04.505 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.505 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.505 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.764 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.764 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:04.764 00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:05.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:05.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:05.281 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:05.538 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:05.538 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:05.538 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:05.796 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:06.053 request: 00:22:06.053 { 00:22:06.053 "name": "nvme0", 00:22:06.053 "dhchap_key": "key1", 00:22:06.053 "dhchap_ctrlr_key": "key3", 00:22:06.053 "method": "bdev_nvme_set_keys", 00:22:06.053 "req_id": 1 00:22:06.053 } 00:22:06.053 Got JSON-RPC error response 00:22:06.053 response: 00:22:06.053 { 00:22:06.053 "code": -13, 00:22:06.053 "message": "Permission denied" 00:22:06.053 } 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:06.053 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:06.053 00:02:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.311 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:06.311 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:07.246 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:07.247 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:07.247 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.505 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:07.506 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:07.506 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.440 nvme0n1 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.440 00:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:08.440 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:08.698 request: 00:22:08.698 { 00:22:08.698 "name": "nvme0", 00:22:08.698 "dhchap_key": "key2", 00:22:08.698 "dhchap_ctrlr_key": "key0", 00:22:08.698 "method": "bdev_nvme_set_keys", 00:22:08.698 "req_id": 1 00:22:08.698 } 00:22:08.698 Got JSON-RPC error response 00:22:08.698 response: 00:22:08.698 { 00:22:08.698 "code": -13, 00:22:08.698 "message": "Permission denied" 00:22:08.698 } 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:08.698 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.957 00:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:08.957 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:09.892 00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:09.892 00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:09.892 00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4006567 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4006567 ']' 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4006567 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4006567 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 4006567' 00:22:10.150 killing process with pid 4006567 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4006567 00:22:10.150 00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4006567 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.683 rmmod nvme_tcp 00:22:12.683 rmmod nvme_fabrics 00:22:12.683 rmmod nvme_keyring 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 4027583 ']' 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 4027583 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 4027583 ']' 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 4027583 
00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4027583 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4027583' 00:22:12.683 killing process with pid 4027583 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 4027583 00:22:12.683 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 4027583 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.618 00:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.618 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5iu /tmp/spdk.key-sha256.Iun /tmp/spdk.key-sha384.gRk /tmp/spdk.key-sha512.kmj /tmp/spdk.key-sha512.dqN /tmp/spdk.key-sha384.d8H /tmp/spdk.key-sha256.BxF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:16.151 00:22:16.151 real 2m36.348s 00:22:16.151 user 5m57.466s 00:22:16.151 sys 0m23.568s 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.151 ************************************ 00:22:16.151 END TEST nvmf_auth_target 00:22:16.151 ************************************ 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:16.151 ************************************ 00:22:16.151 START TEST nvmf_bdevio_no_huge 00:22:16.151 ************************************ 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:16.151 * Looking for test storage... 00:22:16.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.151 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.151 --rc genhtml_branch_coverage=1 
00:22:16.151 --rc genhtml_function_coverage=1 00:22:16.151 --rc genhtml_legend=1 00:22:16.151 --rc geninfo_all_blocks=1 00:22:16.151 --rc geninfo_unexecuted_blocks=1 00:22:16.151 00:22:16.151 ' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.151 00:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.151 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.152 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:21.422 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:21.422 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:21.422 Found net devices under 0000:af:00.0: cvl_0_0 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.422 
00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:21.422 Found net devices under 0000:af:00.1: cvl_0_1 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.422 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:21.423 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:21.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:22:21.682 00:22:21.682 --- 10.0.0.2 ping statistics --- 00:22:21.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.682 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:21.682 00:22:21.682 --- 10.0.0.1 ping statistics --- 00:22:21.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.682 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=4034932 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 4034932 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 4034932 ']' 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.682 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:21.682 [2024-12-14 00:03:00.696754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:21.682 [2024-12-14 00:03:00.696850] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:21.941 [2024-12-14 00:03:00.834747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.941 [2024-12-14 00:03:00.957942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.941 [2024-12-14 00:03:00.957992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.941 [2024-12-14 00:03:00.958006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.941 [2024-12-14 00:03:00.958020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.941 [2024-12-14 00:03:00.958032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.941 [2024-12-14 00:03:00.960015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.941 [2024-12-14 00:03:00.960106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:21.941 [2024-12-14 00:03:00.960173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.941 [2024-12-14 00:03:00.960192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.509 [2024-12-14 00:03:01.566111] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:22.509 00:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.509 Malloc0 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.509 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:22.777 [2024-12-14 00:03:01.658405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.777 00:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:22.777 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:22.778 { 00:22:22.778 "params": { 00:22:22.778 "name": "Nvme$subsystem", 00:22:22.778 "trtype": "$TEST_TRANSPORT", 00:22:22.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.778 "adrfam": "ipv4", 00:22:22.778 "trsvcid": "$NVMF_PORT", 00:22:22.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.778 "hdgst": ${hdgst:-false}, 00:22:22.778 "ddgst": ${ddgst:-false} 00:22:22.778 }, 00:22:22.778 "method": "bdev_nvme_attach_controller" 00:22:22.778 } 00:22:22.778 EOF 00:22:22.778 )") 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:22.778 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:22.778 "params": { 00:22:22.778 "name": "Nvme1", 00:22:22.778 "trtype": "tcp", 00:22:22.778 "traddr": "10.0.0.2", 00:22:22.778 "adrfam": "ipv4", 00:22:22.778 "trsvcid": "4420", 00:22:22.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.778 "hdgst": false, 00:22:22.778 "ddgst": false 00:22:22.778 }, 00:22:22.778 "method": "bdev_nvme_attach_controller" 00:22:22.778 }' 00:22:22.778 [2024-12-14 00:03:01.732157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:22.778 [2024-12-14 00:03:01.732247] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4035242 ] 00:22:22.778 [2024-12-14 00:03:01.863996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.037 [2024-12-14 00:03:01.977410] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.037 [2024-12-14 00:03:01.977477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.037 [2024-12-14 00:03:01.977485] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.603 I/O targets: 00:22:23.603 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:23.603 00:22:23.603 00:22:23.603 CUnit - A unit testing framework for C - Version 2.1-3 00:22:23.603 http://cunit.sourceforge.net/ 00:22:23.603 00:22:23.603 00:22:23.603 Suite: bdevio tests on: Nvme1n1 00:22:23.603 Test: blockdev write read block ...passed 00:22:23.603 Test: blockdev write zeroes read block ...passed 00:22:23.603 Test: blockdev write zeroes read no split ...passed 00:22:23.603 Test: blockdev write zeroes 
read split ...passed 00:22:23.603 Test: blockdev write zeroes read split partial ...passed 00:22:23.603 Test: blockdev reset ...[2024-12-14 00:03:02.645070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:23.603 [2024-12-14 00:03:02.645173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000323a00 (9): Bad file descriptor 00:22:23.603 [2024-12-14 00:03:02.702379] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:23.603 passed 00:22:23.603 Test: blockdev write read 8 blocks ...passed 00:22:23.883 Test: blockdev write read size > 128k ...passed 00:22:23.883 Test: blockdev write read invalid size ...passed 00:22:23.883 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:23.883 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:23.883 Test: blockdev write read max offset ...passed 00:22:23.883 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:23.883 Test: blockdev writev readv 8 blocks ...passed 00:22:23.883 Test: blockdev writev readv 30 x 1block ...passed 00:22:23.883 Test: blockdev writev readv block ...passed 00:22:23.883 Test: blockdev writev readv size > 128k ...passed 00:22:23.883 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:23.883 Test: blockdev comparev and writev ...[2024-12-14 00:03:02.956938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.956984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 
[2024-12-14 00:03:02.957018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.957320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.957356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.957662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.957688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.957975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.957990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:23.883 [2024-12-14 00:03:02.958006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:23.883 [2024-12-14 00:03:02.958017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:23.883 passed 00:22:24.261 Test: blockdev nvme passthru rw ...passed 00:22:24.261 Test: blockdev nvme passthru vendor specific ...[2024-12-14 00:03:03.039912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.261 [2024-12-14 00:03:03.039944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:24.261 [2024-12-14 00:03:03.040082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.261 [2024-12-14 00:03:03.040097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:24.261 [2024-12-14 00:03:03.040220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.261 [2024-12-14 00:03:03.040234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:24.261 [2024-12-14 00:03:03.040360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.261 [2024-12-14 00:03:03.040374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:24.261 passed 00:22:24.261 Test: blockdev nvme admin passthru ...passed 00:22:24.261 Test: blockdev copy ...passed 00:22:24.261 00:22:24.261 Run Summary: Type Total Ran Passed Failed Inactive 00:22:24.261 suites 1 1 n/a 0 0 00:22:24.261 tests 23 23 23 0 0 00:22:24.261 asserts 152 152 152 0 n/a 00:22:24.261 00:22:24.261 Elapsed time = 1.345 
seconds 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.864 rmmod nvme_tcp 00:22:24.864 rmmod nvme_fabrics 00:22:24.864 rmmod nvme_keyring 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 4034932 ']' 00:22:24.864 00:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 4034932 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 4034932 ']' 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 4034932 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4034932 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4034932' 00:22:24.864 killing process with pid 4034932 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 4034932 00:22:24.864 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 4034932 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:25.798 00:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.798 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.799 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.799 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.799 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.799 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.799 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.701 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.701 00:22:27.701 real 0m11.833s 00:22:27.701 user 0m19.907s 00:22:27.701 sys 0m5.320s 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.702 ************************************ 00:22:27.702 END TEST nvmf_bdevio_no_huge 00:22:27.702 ************************************ 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.702 
************************************ 00:22:27.702 START TEST nvmf_tls 00:22:27.702 ************************************ 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:27.702 * Looking for test storage... 00:22:27.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:27.702 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.960 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.961 --rc genhtml_branch_coverage=1 00:22:27.961 --rc genhtml_function_coverage=1 00:22:27.961 --rc genhtml_legend=1 00:22:27.961 --rc geninfo_all_blocks=1 00:22:27.961 --rc geninfo_unexecuted_blocks=1 00:22:27.961 00:22:27.961 ' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.961 --rc genhtml_branch_coverage=1 00:22:27.961 --rc genhtml_function_coverage=1 00:22:27.961 --rc genhtml_legend=1 00:22:27.961 --rc geninfo_all_blocks=1 00:22:27.961 --rc geninfo_unexecuted_blocks=1 00:22:27.961 00:22:27.961 ' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.961 --rc genhtml_branch_coverage=1 00:22:27.961 --rc genhtml_function_coverage=1 00:22:27.961 --rc genhtml_legend=1 00:22:27.961 --rc geninfo_all_blocks=1 00:22:27.961 --rc geninfo_unexecuted_blocks=1 00:22:27.961 00:22:27.961 ' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:27.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.961 --rc genhtml_branch_coverage=1 00:22:27.961 --rc genhtml_function_coverage=1 00:22:27.961 --rc genhtml_legend=1 00:22:27.961 --rc geninfo_all_blocks=1 00:22:27.961 --rc geninfo_unexecuted_blocks=1 00:22:27.961 00:22:27.961 ' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.961 
00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:27.961 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.525 00:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.525 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.525 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.525 00:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.525 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.525 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.525 00:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.525 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.526 
00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:22:34.526 00:22:34.526 --- 10.0.0.2 ping statistics --- 00:22:34.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.526 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:34.526 00:22:34.526 --- 10.0.0.1 ping statistics --- 00:22:34.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.526 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4039492 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4039492 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4039492 ']' 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.526 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.526 [2024-12-14 00:03:12.753248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:34.526 [2024-12-14 00:03:12.753334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.526 [2024-12-14 00:03:12.870245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.526 [2024-12-14 00:03:12.972346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.526 [2024-12-14 00:03:12.972396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:34.526 [2024-12-14 00:03:12.972406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.526 [2024-12-14 00:03:12.972432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.526 [2024-12-14 00:03:12.972445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.526 [2024-12-14 00:03:12.973726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:34.526 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:34.785 true 00:22:34.785 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.785 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:35.044 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:35.044 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:35.044 
00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:35.044 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.044 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:35.302 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:35.302 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:35.302 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:35.563 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.563 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:35.563 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:35.563 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:35.821 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.821 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:35.821 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:35.821 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:35.821 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:36.081 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:36.081 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:36.340 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:36.340 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:36.340 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:36.341 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:36.341 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:36.599 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:36.599 00:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.3CrHVtsC2f 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gOiRXcGyxL 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.3CrHVtsC2f 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gOiRXcGyxL 00:22:36.600 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:36.858 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:37.426 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.3CrHVtsC2f 00:22:37.426 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3CrHVtsC2f 00:22:37.426 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:37.684 [2024-12-14 00:03:16.599023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.684 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:37.684 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:37.943 [2024-12-14 00:03:16.943865] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:37.943 [2024-12-14 00:03:16.944155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.943 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:38.202 malloc0 00:22:38.202 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:38.460 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3CrHVtsC2f 00:22:38.460 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:38.719 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3CrHVtsC2f 00:22:50.922 Initializing NVMe Controllers 00:22:50.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.922 Initialization complete. Launching workers. 
00:22:50.922 ======================================================== 00:22:50.922 Latency(us) 00:22:50.922 Device Information : IOPS MiB/s Average min max 00:22:50.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12970.86 50.67 4934.46 1188.89 6616.79 00:22:50.922 ======================================================== 00:22:50.922 Total : 12970.86 50.67 4934.46 1188.89 6616.79 00:22:50.922 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3CrHVtsC2f 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3CrHVtsC2f 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4041978 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4041978 /var/tmp/bdevperf.sock 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4041978 ']' 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:50.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.922 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.922 [2024-12-14 00:03:28.012007] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:22:50.922 [2024-12-14 00:03:28.012112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041978 ] 00:22:50.922 [2024-12-14 00:03:28.119023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.922 [2024-12-14 00:03:28.231073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.922 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.922 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:50.922 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3CrHVtsC2f 00:22:50.922 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:50.922 [2024-12-14 00:03:29.190586] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:50.922 TLSTESTn1 00:22:50.922 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:50.922 Running I/O for 10 seconds... 00:22:52.423 4540.00 IOPS, 17.73 MiB/s [2024-12-13T23:03:32.508Z] 4552.00 IOPS, 17.78 MiB/s [2024-12-13T23:03:33.444Z] 4501.33 IOPS, 17.58 MiB/s [2024-12-13T23:03:34.818Z] 4457.25 IOPS, 17.41 MiB/s [2024-12-13T23:03:35.754Z] 4429.60 IOPS, 17.30 MiB/s [2024-12-13T23:03:36.689Z] 4308.00 IOPS, 16.83 MiB/s [2024-12-13T23:03:37.627Z] 4224.00 IOPS, 16.50 MiB/s [2024-12-13T23:03:38.565Z] 4160.00 IOPS, 16.25 MiB/s [2024-12-13T23:03:39.501Z] 4101.44 IOPS, 16.02 MiB/s [2024-12-13T23:03:39.501Z] 4065.80 IOPS, 15.88 MiB/s 00:23:00.360 Latency(us) 00:23:00.360 [2024-12-13T23:03:39.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.360 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:00.360 Verification LBA range: start 0x0 length 0x2000 00:23:00.360 TLSTESTn1 : 10.03 4067.75 15.89 0.00 0.00 31407.80 8238.81 31082.79 00:23:00.360 [2024-12-13T23:03:39.502Z] =================================================================================================================== 00:23:00.361 [2024-12-13T23:03:39.502Z] Total : 4067.75 15.89 0.00 0.00 31407.80 8238.81 31082.79 00:23:00.361 { 00:23:00.361 "results": [ 00:23:00.361 { 00:23:00.361 "job": "TLSTESTn1", 00:23:00.361 "core_mask": "0x4", 00:23:00.361 "workload": "verify", 00:23:00.361 "status": "finished", 00:23:00.361 "verify_range": { 00:23:00.361 "start": 0, 00:23:00.361 "length": 8192 00:23:00.361 }, 00:23:00.361 "queue_depth": 128, 00:23:00.361 "io_size": 4096, 00:23:00.361 "runtime": 10.026423, 00:23:00.361 "iops": 
4067.7517794730984, 00:23:00.361 "mibps": 15.88965538856679, 00:23:00.361 "io_failed": 0, 00:23:00.361 "io_timeout": 0, 00:23:00.361 "avg_latency_us": 31407.803853190657, 00:23:00.361 "min_latency_us": 8238.81142857143, 00:23:00.361 "max_latency_us": 31082.788571428573 00:23:00.361 } 00:23:00.361 ], 00:23:00.361 "core_count": 1 00:23:00.361 } 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4041978 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4041978 ']' 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4041978 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.361 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4041978 00:23:00.620 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:00.620 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:00.620 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4041978' 00:23:00.620 killing process with pid 4041978 00:23:00.620 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4041978 00:23:00.620 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.620 00:23:00.620 Latency(us) 00:23:00.620 [2024-12-13T23:03:39.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.620 [2024-12-13T23:03:39.761Z] 
=================================================================================================================== 00:23:00.620 [2024-12-13T23:03:39.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.620 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4041978 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOiRXcGyxL 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOiRXcGyxL 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gOiRXcGyxL 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gOiRXcGyxL 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4043877 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4043877 /var/tmp/bdevperf.sock 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4043877 ']' 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.572 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.572 [2024-12-14 00:03:40.505973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:01.572 [2024-12-14 00:03:40.506069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043877 ] 00:23:01.572 [2024-12-14 00:03:40.614879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.832 [2024-12-14 00:03:40.727740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.399 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.399 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.399 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gOiRXcGyxL 00:23:02.399 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:02.658 [2024-12-14 00:03:41.692531] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.658 [2024-12-14 00:03:41.704883] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:02.658 [2024-12-14 00:03:41.705365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:02.658 [2024-12-14 00:03:41.706350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:02.658 
[2024-12-14 00:03:41.707344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:02.658 [2024-12-14 00:03:41.707368] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:02.658 [2024-12-14 00:03:41.707382] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:02.658 [2024-12-14 00:03:41.707400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:02.658 request: 00:23:02.658 { 00:23:02.658 "name": "TLSTEST", 00:23:02.658 "trtype": "tcp", 00:23:02.658 "traddr": "10.0.0.2", 00:23:02.658 "adrfam": "ipv4", 00:23:02.658 "trsvcid": "4420", 00:23:02.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.658 "prchk_reftag": false, 00:23:02.658 "prchk_guard": false, 00:23:02.658 "hdgst": false, 00:23:02.658 "ddgst": false, 00:23:02.658 "psk": "key0", 00:23:02.658 "allow_unrecognized_csi": false, 00:23:02.658 "method": "bdev_nvme_attach_controller", 00:23:02.658 "req_id": 1 00:23:02.658 } 00:23:02.658 Got JSON-RPC error response 00:23:02.658 response: 00:23:02.658 { 00:23:02.658 "code": -5, 00:23:02.658 "message": "Input/output error" 00:23:02.658 } 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4043877 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4043877 ']' 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4043877 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4043877 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4043877' 00:23:02.658 killing process with pid 4043877 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4043877 00:23:02.658 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.658 00:23:02.658 Latency(us) 00:23:02.658 [2024-12-13T23:03:41.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.658 [2024-12-13T23:03:41.799Z] =================================================================================================================== 00:23:02.658 [2024-12-13T23:03:41.799Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.658 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4043877 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3CrHVtsC2f 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3CrHVtsC2f 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3CrHVtsC2f 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3CrHVtsC2f 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4044317 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4044317 
/var/tmp/bdevperf.sock 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4044317 ']' 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.595 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.595 [2024-12-14 00:03:42.727358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:03.595 [2024-12-14 00:03:42.727460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044317 ] 00:23:03.854 [2024-12-14 00:03:42.834174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.854 [2024-12-14 00:03:42.937358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.432 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.432 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.432 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3CrHVtsC2f 00:23:04.691 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:04.950 [2024-12-14 00:03:43.868617] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.950 [2024-12-14 00:03:43.880061] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:04.950 [2024-12-14 00:03:43.880094] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:04.950 [2024-12-14 00:03:43.880145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:04.950 [2024-12-14 00:03:43.880422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:04.950 [2024-12-14 00:03:43.881402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:04.950 [2024-12-14 00:03:43.882406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:04.950 [2024-12-14 00:03:43.882427] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:04.950 [2024-12-14 00:03:43.882446] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:04.950 [2024-12-14 00:03:43.882461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:04.950 request: 00:23:04.950 { 00:23:04.950 "name": "TLSTEST", 00:23:04.950 "trtype": "tcp", 00:23:04.950 "traddr": "10.0.0.2", 00:23:04.950 "adrfam": "ipv4", 00:23:04.950 "trsvcid": "4420", 00:23:04.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:04.950 "prchk_reftag": false, 00:23:04.950 "prchk_guard": false, 00:23:04.950 "hdgst": false, 00:23:04.950 "ddgst": false, 00:23:04.950 "psk": "key0", 00:23:04.950 "allow_unrecognized_csi": false, 00:23:04.950 "method": "bdev_nvme_attach_controller", 00:23:04.950 "req_id": 1 00:23:04.950 } 00:23:04.950 Got JSON-RPC error response 00:23:04.950 response: 00:23:04.950 { 00:23:04.950 "code": -5, 00:23:04.950 "message": "Input/output error" 00:23:04.950 } 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4044317 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4044317 ']' 00:23:04.950 
00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4044317 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4044317 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4044317' 00:23:04.950 killing process with pid 4044317 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4044317 00:23:04.950 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.950 00:23:04.950 Latency(us) 00:23:04.950 [2024-12-13T23:03:44.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.950 [2024-12-13T23:03:44.091Z] =================================================================================================================== 00:23:04.950 [2024-12-13T23:03:44.091Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.950 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4044317 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.884 
00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3CrHVtsC2f 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3CrHVtsC2f 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3CrHVtsC2f 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3CrHVtsC2f 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4044674 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- 
# trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4044674 /var/tmp/bdevperf.sock 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4044674 ']' 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.884 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.884 [2024-12-14 00:03:44.903138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:05.885 [2024-12-14 00:03:44.903231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044674 ] 00:23:05.885 [2024-12-14 00:03:45.014310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.142 [2024-12-14 00:03:45.125168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.708 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.708 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.708 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3CrHVtsC2f 00:23:06.967 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.967 [2024-12-14 00:03:46.039117] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.967 [2024-12-14 00:03:46.046804] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:06.967 [2024-12-14 00:03:46.046835] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:06.967 [2024-12-14 00:03:46.046873] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:06.967 [2024-12-14 00:03:46.047141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (107): Transport endpoint is not connected 00:23:06.967 [2024-12-14 00:03:46.048122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:23:06.967 [2024-12-14 00:03:46.049123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:06.967 [2024-12-14 00:03:46.049145] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:06.967 [2024-12-14 00:03:46.049160] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:06.967 [2024-12-14 00:03:46.049173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:06.967 request: 00:23:06.967 { 00:23:06.967 "name": "TLSTEST", 00:23:06.967 "trtype": "tcp", 00:23:06.967 "traddr": "10.0.0.2", 00:23:06.967 "adrfam": "ipv4", 00:23:06.967 "trsvcid": "4420", 00:23:06.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.967 "prchk_reftag": false, 00:23:06.967 "prchk_guard": false, 00:23:06.967 "hdgst": false, 00:23:06.967 "ddgst": false, 00:23:06.967 "psk": "key0", 00:23:06.967 "allow_unrecognized_csi": false, 00:23:06.967 "method": "bdev_nvme_attach_controller", 00:23:06.967 "req_id": 1 00:23:06.967 } 00:23:06.967 Got JSON-RPC error response 00:23:06.967 response: 00:23:06.967 { 00:23:06.967 "code": -5, 00:23:06.967 "message": "Input/output error" 00:23:06.967 } 00:23:06.967 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4044674 00:23:06.967 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4044674 ']' 00:23:06.967 
00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4044674 00:23:06.967 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.967 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.967 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4044674 00:23:07.226 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.226 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.226 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4044674' 00:23:07.226 killing process with pid 4044674 00:23:07.226 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4044674 00:23:07.226 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.226 00:23:07.226 Latency(us) 00:23:07.226 [2024-12-13T23:03:46.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.226 [2024-12-13T23:03:46.367Z] =================================================================================================================== 00:23:07.226 [2024-12-13T23:03:46.367Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.226 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4044674 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.168 
00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:08.168 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4045006 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.168 00:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4045006 /var/tmp/bdevperf.sock 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4045006 ']' 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.168 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.168 [2024-12-14 00:03:47.077091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:08.168 [2024-12-14 00:03:47.077189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045006 ] 00:23:08.168 [2024-12-14 00:03:47.182763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.168 [2024-12-14 00:03:47.289501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.105 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.105 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.105 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:09.105 [2024-12-14 00:03:48.054413] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:09.105 [2024-12-14 00:03:48.054459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:09.105 request: 00:23:09.105 { 00:23:09.105 "name": "key0", 00:23:09.105 "path": "", 00:23:09.105 "method": "keyring_file_add_key", 00:23:09.105 "req_id": 1 00:23:09.105 } 00:23:09.105 Got JSON-RPC error response 00:23:09.105 response: 00:23:09.105 { 00:23:09.105 "code": -1, 00:23:09.105 "message": "Operation not permitted" 00:23:09.105 } 00:23:09.105 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:09.105 [2024-12-14 00:03:48.234999] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:09.105 [2024-12-14 00:03:48.235045] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:09.105 request: 00:23:09.105 { 00:23:09.105 "name": "TLSTEST", 00:23:09.105 "trtype": "tcp", 00:23:09.105 "traddr": "10.0.0.2", 00:23:09.105 "adrfam": "ipv4", 00:23:09.105 "trsvcid": "4420", 00:23:09.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.105 "prchk_reftag": false, 00:23:09.105 "prchk_guard": false, 00:23:09.105 "hdgst": false, 00:23:09.105 "ddgst": false, 00:23:09.105 "psk": "key0", 00:23:09.105 "allow_unrecognized_csi": false, 00:23:09.105 "method": "bdev_nvme_attach_controller", 00:23:09.105 "req_id": 1 00:23:09.105 } 00:23:09.105 Got JSON-RPC error response 00:23:09.105 response: 00:23:09.105 { 00:23:09.105 "code": -126, 00:23:09.105 "message": "Required key not available" 00:23:09.105 } 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4045006 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4045006 ']' 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4045006 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4045006 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4045006' 00:23:09.364 killing process with pid 4045006 
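The `keyring_file_add_key` and `bdev_nvme_attach_controller` failures above are ordinary JSON-RPC exchanges over the bdevperf Unix socket (`/var/tmp/bdevperf.sock`); the log prints the params dump and the error object, not the full wire frames. As a rough sketch only: the method names, params, and error codes below are taken from the log, while the JSON-RPC 2.0 envelope fields (`jsonrpc`, `id`) are an assumption about what `scripts/rpc.py` puts on the wire.

```python
import json

def make_request(method, params, req_id=1):
    # Build a JSON-RPC 2.0 request envelope around the params shown in the log.
    return json.dumps({"jsonrpc": "2.0", "method": method, "id": req_id,
                       "params": params})

# The failing call from the log: adding key0 with an empty (non-absolute) path.
req = make_request("keyring_file_add_key", {"name": "key0", "path": ""})

# The target rejects non-absolute paths; the log shows code -1,
# "Operation not permitted".
resp = json.dumps({"jsonrpc": "2.0", "id": 1,
                   "error": {"code": -1, "message": "Operation not permitted"}})

print(json.loads(resp)["error"]["message"])  # Operation not permitted
```

The later `bdev_nvme_attach_controller` attempt then fails with code -126 ("Required key not available"), since `key0` never made it into the keyring.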
00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4045006 00:23:09.364 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.364 00:23:09.364 Latency(us) 00:23:09.364 [2024-12-13T23:03:48.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.364 [2024-12-13T23:03:48.505Z] =================================================================================================================== 00:23:09.364 [2024-12-13T23:03:48.505Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.364 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4045006 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 4039492 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4039492 ']' 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4039492 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4039492 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4039492' 00:23:10.301 killing process with pid 4039492 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4039492 00:23:10.301 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4039492 00:23:11.678 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.678 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:11.678 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:11.678 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.RxVOK3vMXt 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:11.679 00:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.RxVOK3vMXt 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4045679 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4045679 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4045679 ']' 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.679 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.679 [2024-12-14 00:03:50.632158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:11.679 [2024-12-14 00:03:50.632269] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.679 [2024-12-14 00:03:50.751175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.938 [2024-12-14 00:03:50.854599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.938 [2024-12-14 00:03:50.854643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.938 [2024-12-14 00:03:50.854654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.938 [2024-12-14 00:03:50.854664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.938 [2024-12-14 00:03:50.854672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.938 [2024-12-14 00:03:50.856049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RxVOK3vMXt 00:23:12.505 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:12.505 [2024-12-14 00:03:51.636457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.768 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:12.768 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:13.032 [2024-12-14 00:03:52.009425] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.032 [2024-12-14 00:03:52.009694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:13.032 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:13.291 malloc0 00:23:13.291 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:13.291 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:13.560 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RxVOK3vMXt 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RxVOK3vMXt 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4045948 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.823 00:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4045948 /var/tmp/bdevperf.sock 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4045948 ']' 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.823 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.823 [2024-12-14 00:03:52.849822] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:13.823 [2024-12-14 00:03:52.849909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045948 ] 00:23:13.823 [2024-12-14 00:03:52.955849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.080 [2024-12-14 00:03:53.063488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.648 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.648 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.648 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:14.907 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.907 [2024-12-14 00:03:54.006950] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.166 TLSTESTn1 00:23:15.166 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:15.166 Running I/O for 10 seconds... 
00:23:17.481 4591.00 IOPS, 17.93 MiB/s [2024-12-13T23:03:57.558Z] 4640.00 IOPS, 18.12 MiB/s [2024-12-13T23:03:58.538Z] 4527.33 IOPS, 17.68 MiB/s [2024-12-13T23:03:59.238Z] 4476.25 IOPS, 17.49 MiB/s [2024-12-13T23:04:00.614Z] 4443.80 IOPS, 17.36 MiB/s [2024-12-13T23:04:01.553Z] 4366.50 IOPS, 17.06 MiB/s [2024-12-13T23:04:02.490Z] 4340.57 IOPS, 16.96 MiB/s [2024-12-13T23:04:03.426Z] 4341.75 IOPS, 16.96 MiB/s [2024-12-13T23:04:04.362Z] 4334.22 IOPS, 16.93 MiB/s [2024-12-13T23:04:04.362Z] 4325.20 IOPS, 16.90 MiB/s 00:23:25.221 Latency(us) 00:23:25.221 [2024-12-13T23:04:04.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.221 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:25.221 Verification LBA range: start 0x0 length 0x2000 00:23:25.221 TLSTESTn1 : 10.02 4328.53 16.91 0.00 0.00 29524.61 5898.24 33704.23 00:23:25.221 [2024-12-13T23:04:04.362Z] =================================================================================================================== 00:23:25.221 [2024-12-13T23:04:04.362Z] Total : 4328.53 16.91 0.00 0.00 29524.61 5898.24 33704.23 00:23:25.221 { 00:23:25.221 "results": [ 00:23:25.221 { 00:23:25.221 "job": "TLSTESTn1", 00:23:25.221 "core_mask": "0x4", 00:23:25.221 "workload": "verify", 00:23:25.221 "status": "finished", 00:23:25.221 "verify_range": { 00:23:25.221 "start": 0, 00:23:25.221 "length": 8192 00:23:25.221 }, 00:23:25.221 "queue_depth": 128, 00:23:25.221 "io_size": 4096, 00:23:25.221 "runtime": 10.021655, 00:23:25.221 "iops": 4328.526575700321, 00:23:25.221 "mibps": 16.908306936329378, 00:23:25.221 "io_failed": 0, 00:23:25.221 "io_timeout": 0, 00:23:25.221 "avg_latency_us": 29524.611555075477, 00:23:25.221 "min_latency_us": 5898.24, 00:23:25.221 "max_latency_us": 33704.22857142857 00:23:25.221 } 00:23:25.221 ], 00:23:25.221 "core_count": 1 00:23:25.221 } 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 4045948 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4045948 ']' 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4045948 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4045948 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4045948' 00:23:25.221 killing process with pid 4045948 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4045948 00:23:25.221 Received shutdown signal, test time was about 10.000000 seconds 00:23:25.221 00:23:25.221 Latency(us) 00:23:25.221 [2024-12-13T23:04:04.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.221 [2024-12-13T23:04:04.362Z] =================================================================================================================== 00:23:25.221 [2024-12-13T23:04:04.362Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:25.221 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4045948 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.RxVOK3vMXt 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RxVOK3vMXt 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RxVOK3vMXt 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RxVOK3vMXt 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RxVOK3vMXt 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4047963 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4047963 /var/tmp/bdevperf.sock 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4047963 ']' 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.156 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.415 [2024-12-14 00:04:05.303894] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:26.415 [2024-12-14 00:04:05.303982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047963 ] 00:23:26.415 [2024-12-14 00:04:05.411417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.415 [2024-12-14 00:04:05.518871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.981 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.239 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.239 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:27.239 [2024-12-14 00:04:06.287935] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.RxVOK3vMXt': 0100666 00:23:27.239 [2024-12-14 00:04:06.287978] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:27.239 request: 00:23:27.239 { 00:23:27.239 "name": "key0", 00:23:27.239 "path": "/tmp/tmp.RxVOK3vMXt", 00:23:27.239 "method": "keyring_file_add_key", 00:23:27.239 "req_id": 1 00:23:27.239 } 00:23:27.239 Got JSON-RPC error response 00:23:27.239 response: 00:23:27.239 { 00:23:27.239 "code": -1, 00:23:27.239 "message": "Operation not permitted" 00:23:27.239 } 00:23:27.239 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.498 [2024-12-14 00:04:06.472522] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.498 [2024-12-14 00:04:06.472560] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:27.498 request: 00:23:27.498 { 00:23:27.498 "name": "TLSTEST", 00:23:27.498 "trtype": "tcp", 00:23:27.498 "traddr": "10.0.0.2", 00:23:27.498 "adrfam": "ipv4", 00:23:27.498 "trsvcid": "4420", 00:23:27.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.498 "prchk_reftag": false, 00:23:27.498 "prchk_guard": false, 00:23:27.498 "hdgst": false, 00:23:27.498 "ddgst": false, 00:23:27.498 "psk": "key0", 00:23:27.498 "allow_unrecognized_csi": false, 00:23:27.498 "method": "bdev_nvme_attach_controller", 00:23:27.498 "req_id": 1 00:23:27.498 } 00:23:27.498 Got JSON-RPC error response 00:23:27.498 response: 00:23:27.498 { 00:23:27.498 "code": -126, 00:23:27.498 "message": "Required key not available" 00:23:27.498 } 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 4047963 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4047963 ']' 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4047963 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4047963 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 4047963' 00:23:27.498 killing process with pid 4047963 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4047963 00:23:27.498 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.498 00:23:27.498 Latency(us) 00:23:27.498 [2024-12-13T23:04:06.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.498 [2024-12-13T23:04:06.639Z] =================================================================================================================== 00:23:27.498 [2024-12-13T23:04:06.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:27.498 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4047963 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 4045679 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4045679 ']' 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4045679 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4045679 00:23:28.433 
00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4045679' 00:23:28.433 killing process with pid 4045679 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4045679 00:23:28.433 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4045679 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4048522 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4048522 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4048522 ']' 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:29.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.810 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.810 [2024-12-14 00:04:08.756370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:29.810 [2024-12-14 00:04:08.756484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.810 [2024-12-14 00:04:08.873817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.067 [2024-12-14 00:04:08.976249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.067 [2024-12-14 00:04:08.976292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.067 [2024-12-14 00:04:08.976301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.067 [2024-12-14 00:04:08.976311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.067 [2024-12-14 00:04:08.976319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.067 [2024-12-14 00:04:08.977570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:30.633 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.634 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:30.634 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:30.634 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:30.634 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RxVOK3vMXt 00:23:30.634 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:30.634 [2024-12-14 00:04:09.769262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.892 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:30.892 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:31.151 [2024-12-14 00:04:10.154302] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:31.151 [2024-12-14 00:04:10.154562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.151 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:31.410 malloc0 00:23:31.410 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:31.667 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:31.667 [2024-12-14 00:04:10.745336] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.RxVOK3vMXt': 0100666 00:23:31.667 [2024-12-14 00:04:10.745369] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:31.667 request: 00:23:31.667 { 00:23:31.667 "name": "key0", 00:23:31.667 "path": "/tmp/tmp.RxVOK3vMXt", 00:23:31.667 "method": "keyring_file_add_key", 00:23:31.667 "req_id": 1 
00:23:31.667 } 00:23:31.667 Got JSON-RPC error response 00:23:31.667 response: 00:23:31.667 { 00:23:31.667 "code": -1, 00:23:31.667 "message": "Operation not permitted" 00:23:31.667 } 00:23:31.667 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:31.925 [2024-12-14 00:04:10.917816] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:31.925 [2024-12-14 00:04:10.917859] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:31.925 request: 00:23:31.925 { 00:23:31.925 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.925 "host": "nqn.2016-06.io.spdk:host1", 00:23:31.925 "psk": "key0", 00:23:31.925 "method": "nvmf_subsystem_add_host", 00:23:31.925 "req_id": 1 00:23:31.925 } 00:23:31.925 Got JSON-RPC error response 00:23:31.925 response: 00:23:31.925 { 00:23:31.925 "code": -32603, 00:23:31.925 "message": "Internal error" 00:23:31.925 } 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 4048522 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4048522 ']' 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4048522 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.925 00:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4048522 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4048522' 00:23:31.925 killing process with pid 4048522 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4048522 00:23:31.925 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4048522 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.RxVOK3vMXt 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4049128 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4049128 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4049128 ']' 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.301 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.301 [2024-12-14 00:04:12.259246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:33.301 [2024-12-14 00:04:12.259338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.301 [2024-12-14 00:04:12.377534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.558 [2024-12-14 00:04:12.484107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.559 [2024-12-14 00:04:12.484151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.559 [2024-12-14 00:04:12.484161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.559 [2024-12-14 00:04:12.484187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.559 [2024-12-14 00:04:12.484195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:33.559 [2024-12-14 00:04:12.485585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RxVOK3vMXt 00:23:34.125 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.384 [2024-12-14 00:04:13.268952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.384 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.384 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.643 [2024-12-14 00:04:13.625874] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.643 [2024-12-14 00:04:13.626134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:34.643 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.902 malloc0 00:23:34.902 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.160 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:35.160 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=4049546 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 4049546 /var/tmp/bdevperf.sock 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4049546 ']' 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:35.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.419 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.419 [2024-12-14 00:04:14.490289] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:35.419 [2024-12-14 00:04:14.490380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049546 ] 00:23:35.677 [2024-12-14 00:04:14.597431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.677 [2024-12-14 00:04:14.708671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.242 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.242 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.242 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:36.501 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.759 [2024-12-14 00:04:15.655981] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.759 TLSTESTn1 00:23:36.759 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:37.018 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:37.018 "subsystems": [ 00:23:37.018 { 00:23:37.018 "subsystem": "keyring", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "keyring_file_add_key", 00:23:37.018 "params": { 00:23:37.018 "name": "key0", 00:23:37.018 "path": "/tmp/tmp.RxVOK3vMXt" 00:23:37.018 } 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "iobuf", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "iobuf_set_options", 00:23:37.018 "params": { 00:23:37.018 "small_pool_count": 8192, 00:23:37.018 "large_pool_count": 1024, 00:23:37.018 "small_bufsize": 8192, 00:23:37.018 "large_bufsize": 135168, 00:23:37.018 "enable_numa": false 00:23:37.018 } 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "sock", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "sock_set_default_impl", 00:23:37.018 "params": { 00:23:37.018 "impl_name": "posix" 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "sock_impl_set_options", 00:23:37.018 "params": { 00:23:37.018 "impl_name": "ssl", 00:23:37.018 "recv_buf_size": 4096, 00:23:37.018 "send_buf_size": 4096, 00:23:37.018 "enable_recv_pipe": true, 00:23:37.018 "enable_quickack": false, 00:23:37.018 "enable_placement_id": 0, 00:23:37.018 "enable_zerocopy_send_server": true, 00:23:37.018 "enable_zerocopy_send_client": false, 00:23:37.018 "zerocopy_threshold": 0, 00:23:37.018 "tls_version": 0, 00:23:37.018 "enable_ktls": false 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "sock_impl_set_options", 00:23:37.018 "params": { 00:23:37.018 "impl_name": "posix", 00:23:37.018 "recv_buf_size": 2097152, 00:23:37.018 "send_buf_size": 2097152, 00:23:37.018 "enable_recv_pipe": true, 00:23:37.018 "enable_quickack": false, 00:23:37.018 "enable_placement_id": 0, 
00:23:37.018 "enable_zerocopy_send_server": true, 00:23:37.018 "enable_zerocopy_send_client": false, 00:23:37.018 "zerocopy_threshold": 0, 00:23:37.018 "tls_version": 0, 00:23:37.018 "enable_ktls": false 00:23:37.018 } 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "vmd", 00:23:37.018 "config": [] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "accel", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "accel_set_options", 00:23:37.018 "params": { 00:23:37.018 "small_cache_size": 128, 00:23:37.018 "large_cache_size": 16, 00:23:37.018 "task_count": 2048, 00:23:37.018 "sequence_count": 2048, 00:23:37.018 "buf_count": 2048 00:23:37.018 } 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "bdev", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "bdev_set_options", 00:23:37.018 "params": { 00:23:37.018 "bdev_io_pool_size": 65535, 00:23:37.018 "bdev_io_cache_size": 256, 00:23:37.018 "bdev_auto_examine": true, 00:23:37.018 "iobuf_small_cache_size": 128, 00:23:37.018 "iobuf_large_cache_size": 16 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_raid_set_options", 00:23:37.018 "params": { 00:23:37.018 "process_window_size_kb": 1024, 00:23:37.018 "process_max_bandwidth_mb_sec": 0 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_iscsi_set_options", 00:23:37.018 "params": { 00:23:37.018 "timeout_sec": 30 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_nvme_set_options", 00:23:37.018 "params": { 00:23:37.018 "action_on_timeout": "none", 00:23:37.018 "timeout_us": 0, 00:23:37.018 "timeout_admin_us": 0, 00:23:37.018 "keep_alive_timeout_ms": 10000, 00:23:37.018 "arbitration_burst": 0, 00:23:37.018 "low_priority_weight": 0, 00:23:37.018 "medium_priority_weight": 0, 00:23:37.018 "high_priority_weight": 0, 00:23:37.018 "nvme_adminq_poll_period_us": 10000, 00:23:37.018 "nvme_ioq_poll_period_us": 0, 
00:23:37.018 "io_queue_requests": 0, 00:23:37.018 "delay_cmd_submit": true, 00:23:37.018 "transport_retry_count": 4, 00:23:37.018 "bdev_retry_count": 3, 00:23:37.018 "transport_ack_timeout": 0, 00:23:37.018 "ctrlr_loss_timeout_sec": 0, 00:23:37.018 "reconnect_delay_sec": 0, 00:23:37.018 "fast_io_fail_timeout_sec": 0, 00:23:37.018 "disable_auto_failback": false, 00:23:37.018 "generate_uuids": false, 00:23:37.018 "transport_tos": 0, 00:23:37.018 "nvme_error_stat": false, 00:23:37.018 "rdma_srq_size": 0, 00:23:37.018 "io_path_stat": false, 00:23:37.018 "allow_accel_sequence": false, 00:23:37.018 "rdma_max_cq_size": 0, 00:23:37.018 "rdma_cm_event_timeout_ms": 0, 00:23:37.018 "dhchap_digests": [ 00:23:37.018 "sha256", 00:23:37.018 "sha384", 00:23:37.018 "sha512" 00:23:37.018 ], 00:23:37.018 "dhchap_dhgroups": [ 00:23:37.018 "null", 00:23:37.018 "ffdhe2048", 00:23:37.018 "ffdhe3072", 00:23:37.018 "ffdhe4096", 00:23:37.018 "ffdhe6144", 00:23:37.018 "ffdhe8192" 00:23:37.018 ], 00:23:37.018 "rdma_umr_per_io": false 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_nvme_set_hotplug", 00:23:37.018 "params": { 00:23:37.018 "period_us": 100000, 00:23:37.018 "enable": false 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_malloc_create", 00:23:37.018 "params": { 00:23:37.018 "name": "malloc0", 00:23:37.018 "num_blocks": 8192, 00:23:37.018 "block_size": 4096, 00:23:37.018 "physical_block_size": 4096, 00:23:37.018 "uuid": "be5508b5-5c26-4ba7-9607-a8daf6241e6e", 00:23:37.018 "optimal_io_boundary": 0, 00:23:37.018 "md_size": 0, 00:23:37.018 "dif_type": 0, 00:23:37.018 "dif_is_head_of_md": false, 00:23:37.018 "dif_pi_format": 0 00:23:37.018 } 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "method": "bdev_wait_for_examine" 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "nbd", 00:23:37.018 "config": [] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "scheduler", 00:23:37.018 "config": [ 
00:23:37.018 { 00:23:37.018 "method": "framework_set_scheduler", 00:23:37.018 "params": { 00:23:37.018 "name": "static" 00:23:37.018 } 00:23:37.018 } 00:23:37.018 ] 00:23:37.018 }, 00:23:37.018 { 00:23:37.018 "subsystem": "nvmf", 00:23:37.018 "config": [ 00:23:37.018 { 00:23:37.018 "method": "nvmf_set_config", 00:23:37.018 "params": { 00:23:37.018 "discovery_filter": "match_any", 00:23:37.018 "admin_cmd_passthru": { 00:23:37.018 "identify_ctrlr": false 00:23:37.018 }, 00:23:37.018 "dhchap_digests": [ 00:23:37.018 "sha256", 00:23:37.019 "sha384", 00:23:37.019 "sha512" 00:23:37.019 ], 00:23:37.019 "dhchap_dhgroups": [ 00:23:37.019 "null", 00:23:37.019 "ffdhe2048", 00:23:37.019 "ffdhe3072", 00:23:37.019 "ffdhe4096", 00:23:37.019 "ffdhe6144", 00:23:37.019 "ffdhe8192" 00:23:37.019 ] 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_set_max_subsystems", 00:23:37.019 "params": { 00:23:37.019 "max_subsystems": 1024 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_set_crdt", 00:23:37.019 "params": { 00:23:37.019 "crdt1": 0, 00:23:37.019 "crdt2": 0, 00:23:37.019 "crdt3": 0 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_create_transport", 00:23:37.019 "params": { 00:23:37.019 "trtype": "TCP", 00:23:37.019 "max_queue_depth": 128, 00:23:37.019 "max_io_qpairs_per_ctrlr": 127, 00:23:37.019 "in_capsule_data_size": 4096, 00:23:37.019 "max_io_size": 131072, 00:23:37.019 "io_unit_size": 131072, 00:23:37.019 "max_aq_depth": 128, 00:23:37.019 "num_shared_buffers": 511, 00:23:37.019 "buf_cache_size": 4294967295, 00:23:37.019 "dif_insert_or_strip": false, 00:23:37.019 "zcopy": false, 00:23:37.019 "c2h_success": false, 00:23:37.019 "sock_priority": 0, 00:23:37.019 "abort_timeout_sec": 1, 00:23:37.019 "ack_timeout": 0, 00:23:37.019 "data_wr_pool_size": 0 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_create_subsystem", 00:23:37.019 "params": { 00:23:37.019 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:37.019 "allow_any_host": false, 00:23:37.019 "serial_number": "SPDK00000000000001", 00:23:37.019 "model_number": "SPDK bdev Controller", 00:23:37.019 "max_namespaces": 10, 00:23:37.019 "min_cntlid": 1, 00:23:37.019 "max_cntlid": 65519, 00:23:37.019 "ana_reporting": false 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_subsystem_add_host", 00:23:37.019 "params": { 00:23:37.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.019 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.019 "psk": "key0" 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_subsystem_add_ns", 00:23:37.019 "params": { 00:23:37.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.019 "namespace": { 00:23:37.019 "nsid": 1, 00:23:37.019 "bdev_name": "malloc0", 00:23:37.019 "nguid": "BE5508B55C264BA79607A8DAF6241E6E", 00:23:37.019 "uuid": "be5508b5-5c26-4ba7-9607-a8daf6241e6e", 00:23:37.019 "no_auto_visible": false 00:23:37.019 } 00:23:37.019 } 00:23:37.019 }, 00:23:37.019 { 00:23:37.019 "method": "nvmf_subsystem_add_listener", 00:23:37.019 "params": { 00:23:37.019 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.019 "listen_address": { 00:23:37.019 "trtype": "TCP", 00:23:37.019 "adrfam": "IPv4", 00:23:37.019 "traddr": "10.0.0.2", 00:23:37.019 "trsvcid": "4420" 00:23:37.019 }, 00:23:37.019 "secure_channel": true 00:23:37.019 } 00:23:37.019 } 00:23:37.019 ] 00:23:37.019 } 00:23:37.019 ] 00:23:37.019 }' 00:23:37.019 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:37.278 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:37.278 "subsystems": [ 00:23:37.278 { 00:23:37.278 "subsystem": "keyring", 00:23:37.278 "config": [ 00:23:37.278 { 00:23:37.278 "method": "keyring_file_add_key", 00:23:37.278 "params": { 00:23:37.278 "name": "key0", 00:23:37.278 "path": 
"/tmp/tmp.RxVOK3vMXt" 00:23:37.278 } 00:23:37.278 } 00:23:37.278 ] 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "subsystem": "iobuf", 00:23:37.278 "config": [ 00:23:37.278 { 00:23:37.278 "method": "iobuf_set_options", 00:23:37.278 "params": { 00:23:37.278 "small_pool_count": 8192, 00:23:37.278 "large_pool_count": 1024, 00:23:37.278 "small_bufsize": 8192, 00:23:37.278 "large_bufsize": 135168, 00:23:37.278 "enable_numa": false 00:23:37.278 } 00:23:37.278 } 00:23:37.278 ] 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "subsystem": "sock", 00:23:37.278 "config": [ 00:23:37.278 { 00:23:37.278 "method": "sock_set_default_impl", 00:23:37.278 "params": { 00:23:37.278 "impl_name": "posix" 00:23:37.278 } 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "method": "sock_impl_set_options", 00:23:37.278 "params": { 00:23:37.278 "impl_name": "ssl", 00:23:37.278 "recv_buf_size": 4096, 00:23:37.278 "send_buf_size": 4096, 00:23:37.278 "enable_recv_pipe": true, 00:23:37.278 "enable_quickack": false, 00:23:37.278 "enable_placement_id": 0, 00:23:37.278 "enable_zerocopy_send_server": true, 00:23:37.278 "enable_zerocopy_send_client": false, 00:23:37.278 "zerocopy_threshold": 0, 00:23:37.278 "tls_version": 0, 00:23:37.278 "enable_ktls": false 00:23:37.278 } 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "method": "sock_impl_set_options", 00:23:37.278 "params": { 00:23:37.278 "impl_name": "posix", 00:23:37.278 "recv_buf_size": 2097152, 00:23:37.278 "send_buf_size": 2097152, 00:23:37.278 "enable_recv_pipe": true, 00:23:37.278 "enable_quickack": false, 00:23:37.278 "enable_placement_id": 0, 00:23:37.278 "enable_zerocopy_send_server": true, 00:23:37.278 "enable_zerocopy_send_client": false, 00:23:37.278 "zerocopy_threshold": 0, 00:23:37.278 "tls_version": 0, 00:23:37.278 "enable_ktls": false 00:23:37.278 } 00:23:37.278 } 00:23:37.278 ] 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "subsystem": "vmd", 00:23:37.278 "config": [] 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "subsystem": "accel", 00:23:37.278 
"config": [ 00:23:37.278 { 00:23:37.278 "method": "accel_set_options", 00:23:37.278 "params": { 00:23:37.278 "small_cache_size": 128, 00:23:37.278 "large_cache_size": 16, 00:23:37.278 "task_count": 2048, 00:23:37.278 "sequence_count": 2048, 00:23:37.278 "buf_count": 2048 00:23:37.278 } 00:23:37.278 } 00:23:37.278 ] 00:23:37.278 }, 00:23:37.278 { 00:23:37.278 "subsystem": "bdev", 00:23:37.278 "config": [ 00:23:37.278 { 00:23:37.278 "method": "bdev_set_options", 00:23:37.278 "params": { 00:23:37.278 "bdev_io_pool_size": 65535, 00:23:37.278 "bdev_io_cache_size": 256, 00:23:37.279 "bdev_auto_examine": true, 00:23:37.279 "iobuf_small_cache_size": 128, 00:23:37.279 "iobuf_large_cache_size": 16 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_raid_set_options", 00:23:37.279 "params": { 00:23:37.279 "process_window_size_kb": 1024, 00:23:37.279 "process_max_bandwidth_mb_sec": 0 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_iscsi_set_options", 00:23:37.279 "params": { 00:23:37.279 "timeout_sec": 30 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_nvme_set_options", 00:23:37.279 "params": { 00:23:37.279 "action_on_timeout": "none", 00:23:37.279 "timeout_us": 0, 00:23:37.279 "timeout_admin_us": 0, 00:23:37.279 "keep_alive_timeout_ms": 10000, 00:23:37.279 "arbitration_burst": 0, 00:23:37.279 "low_priority_weight": 0, 00:23:37.279 "medium_priority_weight": 0, 00:23:37.279 "high_priority_weight": 0, 00:23:37.279 "nvme_adminq_poll_period_us": 10000, 00:23:37.279 "nvme_ioq_poll_period_us": 0, 00:23:37.279 "io_queue_requests": 512, 00:23:37.279 "delay_cmd_submit": true, 00:23:37.279 "transport_retry_count": 4, 00:23:37.279 "bdev_retry_count": 3, 00:23:37.279 "transport_ack_timeout": 0, 00:23:37.279 "ctrlr_loss_timeout_sec": 0, 00:23:37.279 "reconnect_delay_sec": 0, 00:23:37.279 "fast_io_fail_timeout_sec": 0, 00:23:37.279 "disable_auto_failback": false, 00:23:37.279 "generate_uuids": false, 00:23:37.279 
"transport_tos": 0, 00:23:37.279 "nvme_error_stat": false, 00:23:37.279 "rdma_srq_size": 0, 00:23:37.279 "io_path_stat": false, 00:23:37.279 "allow_accel_sequence": false, 00:23:37.279 "rdma_max_cq_size": 0, 00:23:37.279 "rdma_cm_event_timeout_ms": 0, 00:23:37.279 "dhchap_digests": [ 00:23:37.279 "sha256", 00:23:37.279 "sha384", 00:23:37.279 "sha512" 00:23:37.279 ], 00:23:37.279 "dhchap_dhgroups": [ 00:23:37.279 "null", 00:23:37.279 "ffdhe2048", 00:23:37.279 "ffdhe3072", 00:23:37.279 "ffdhe4096", 00:23:37.279 "ffdhe6144", 00:23:37.279 "ffdhe8192" 00:23:37.279 ], 00:23:37.279 "rdma_umr_per_io": false 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_nvme_attach_controller", 00:23:37.279 "params": { 00:23:37.279 "name": "TLSTEST", 00:23:37.279 "trtype": "TCP", 00:23:37.279 "adrfam": "IPv4", 00:23:37.279 "traddr": "10.0.0.2", 00:23:37.279 "trsvcid": "4420", 00:23:37.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.279 "prchk_reftag": false, 00:23:37.279 "prchk_guard": false, 00:23:37.279 "ctrlr_loss_timeout_sec": 0, 00:23:37.279 "reconnect_delay_sec": 0, 00:23:37.279 "fast_io_fail_timeout_sec": 0, 00:23:37.279 "psk": "key0", 00:23:37.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.279 "hdgst": false, 00:23:37.279 "ddgst": false, 00:23:37.279 "multipath": "multipath" 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_nvme_set_hotplug", 00:23:37.279 "params": { 00:23:37.279 "period_us": 100000, 00:23:37.279 "enable": false 00:23:37.279 } 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "method": "bdev_wait_for_examine" 00:23:37.279 } 00:23:37.279 ] 00:23:37.279 }, 00:23:37.279 { 00:23:37.279 "subsystem": "nbd", 00:23:37.279 "config": [] 00:23:37.279 } 00:23:37.279 ] 00:23:37.279 }' 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 4049546 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4049546 ']' 00:23:37.279 00:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4049546 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049546 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049546' 00:23:37.279 killing process with pid 4049546 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4049546 00:23:37.279 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.279 00:23:37.279 Latency(us) 00:23:37.279 [2024-12-13T23:04:16.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.279 [2024-12-13T23:04:16.420Z] =================================================================================================================== 00:23:37.279 [2024-12-13T23:04:16.420Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.279 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4049546 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 4049128 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4049128 ']' 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4049128 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4049128 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4049128' 00:23:38.216 killing process with pid 4049128 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4049128 00:23:38.216 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4049128 00:23:39.593 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:39.593 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.593 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.593 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:39.593 "subsystems": [ 00:23:39.593 { 00:23:39.593 "subsystem": "keyring", 00:23:39.593 "config": [ 00:23:39.593 { 00:23:39.593 "method": "keyring_file_add_key", 00:23:39.593 "params": { 00:23:39.593 "name": "key0", 00:23:39.593 "path": "/tmp/tmp.RxVOK3vMXt" 00:23:39.593 } 00:23:39.593 } 00:23:39.593 ] 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "subsystem": "iobuf", 00:23:39.593 "config": [ 00:23:39.593 { 00:23:39.593 "method": "iobuf_set_options", 00:23:39.593 "params": { 00:23:39.593 "small_pool_count": 8192, 00:23:39.593 "large_pool_count": 1024, 00:23:39.593 "small_bufsize": 8192, 00:23:39.593 "large_bufsize": 135168, 00:23:39.593 "enable_numa": false 
00:23:39.593 } 00:23:39.593 } 00:23:39.593 ] 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "subsystem": "sock", 00:23:39.593 "config": [ 00:23:39.593 { 00:23:39.593 "method": "sock_set_default_impl", 00:23:39.593 "params": { 00:23:39.593 "impl_name": "posix" 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "sock_impl_set_options", 00:23:39.593 "params": { 00:23:39.593 "impl_name": "ssl", 00:23:39.593 "recv_buf_size": 4096, 00:23:39.593 "send_buf_size": 4096, 00:23:39.593 "enable_recv_pipe": true, 00:23:39.593 "enable_quickack": false, 00:23:39.593 "enable_placement_id": 0, 00:23:39.593 "enable_zerocopy_send_server": true, 00:23:39.593 "enable_zerocopy_send_client": false, 00:23:39.593 "zerocopy_threshold": 0, 00:23:39.593 "tls_version": 0, 00:23:39.593 "enable_ktls": false 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "sock_impl_set_options", 00:23:39.593 "params": { 00:23:39.593 "impl_name": "posix", 00:23:39.593 "recv_buf_size": 2097152, 00:23:39.593 "send_buf_size": 2097152, 00:23:39.593 "enable_recv_pipe": true, 00:23:39.593 "enable_quickack": false, 00:23:39.593 "enable_placement_id": 0, 00:23:39.593 "enable_zerocopy_send_server": true, 00:23:39.593 "enable_zerocopy_send_client": false, 00:23:39.593 "zerocopy_threshold": 0, 00:23:39.593 "tls_version": 0, 00:23:39.593 "enable_ktls": false 00:23:39.593 } 00:23:39.593 } 00:23:39.593 ] 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "subsystem": "vmd", 00:23:39.593 "config": [] 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "subsystem": "accel", 00:23:39.593 "config": [ 00:23:39.593 { 00:23:39.593 "method": "accel_set_options", 00:23:39.593 "params": { 00:23:39.593 "small_cache_size": 128, 00:23:39.593 "large_cache_size": 16, 00:23:39.593 "task_count": 2048, 00:23:39.593 "sequence_count": 2048, 00:23:39.593 "buf_count": 2048 00:23:39.593 } 00:23:39.593 } 00:23:39.593 ] 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "subsystem": "bdev", 00:23:39.593 "config": [ 00:23:39.593 { 
00:23:39.593 "method": "bdev_set_options", 00:23:39.593 "params": { 00:23:39.593 "bdev_io_pool_size": 65535, 00:23:39.593 "bdev_io_cache_size": 256, 00:23:39.593 "bdev_auto_examine": true, 00:23:39.593 "iobuf_small_cache_size": 128, 00:23:39.593 "iobuf_large_cache_size": 16 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "bdev_raid_set_options", 00:23:39.593 "params": { 00:23:39.593 "process_window_size_kb": 1024, 00:23:39.593 "process_max_bandwidth_mb_sec": 0 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "bdev_iscsi_set_options", 00:23:39.593 "params": { 00:23:39.593 "timeout_sec": 30 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "bdev_nvme_set_options", 00:23:39.593 "params": { 00:23:39.593 "action_on_timeout": "none", 00:23:39.593 "timeout_us": 0, 00:23:39.593 "timeout_admin_us": 0, 00:23:39.593 "keep_alive_timeout_ms": 10000, 00:23:39.593 "arbitration_burst": 0, 00:23:39.593 "low_priority_weight": 0, 00:23:39.593 "medium_priority_weight": 0, 00:23:39.593 "high_priority_weight": 0, 00:23:39.593 "nvme_adminq_poll_period_us": 10000, 00:23:39.593 "nvme_ioq_poll_period_us": 0, 00:23:39.593 "io_queue_requests": 0, 00:23:39.593 "delay_cmd_submit": true, 00:23:39.593 "transport_retry_count": 4, 00:23:39.593 "bdev_retry_count": 3, 00:23:39.593 "transport_ack_timeout": 0, 00:23:39.593 "ctrlr_loss_timeout_sec": 0, 00:23:39.593 "reconnect_delay_sec": 0, 00:23:39.593 "fast_io_fail_timeout_sec": 0, 00:23:39.593 "disable_auto_failback": false, 00:23:39.593 "generate_uuids": false, 00:23:39.593 "transport_tos": 0, 00:23:39.593 "nvme_error_stat": false, 00:23:39.593 "rdma_srq_size": 0, 00:23:39.593 "io_path_stat": false, 00:23:39.593 "allow_accel_sequence": false, 00:23:39.593 "rdma_max_cq_size": 0, 00:23:39.593 "rdma_cm_event_timeout_ms": 0, 00:23:39.593 "dhchap_digests": [ 00:23:39.593 "sha256", 00:23:39.593 "sha384", 00:23:39.593 "sha512" 00:23:39.593 ], 00:23:39.593 "dhchap_dhgroups": [ 00:23:39.593 "null", 
00:23:39.593 "ffdhe2048", 00:23:39.593 "ffdhe3072", 00:23:39.593 "ffdhe4096", 00:23:39.593 "ffdhe6144", 00:23:39.593 "ffdhe8192" 00:23:39.593 ], 00:23:39.593 "rdma_umr_per_io": false 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "bdev_nvme_set_hotplug", 00:23:39.593 "params": { 00:23:39.593 "period_us": 100000, 00:23:39.593 "enable": false 00:23:39.593 } 00:23:39.593 }, 00:23:39.593 { 00:23:39.593 "method": "bdev_malloc_create", 00:23:39.593 "params": { 00:23:39.593 "name": "malloc0", 00:23:39.593 "num_blocks": 8192, 00:23:39.593 "block_size": 4096, 00:23:39.593 "physical_block_size": 4096, 00:23:39.594 "uuid": "be5508b5-5c26-4ba7-9607-a8daf6241e6e", 00:23:39.594 "optimal_io_boundary": 0, 00:23:39.594 "md_size": 0, 00:23:39.594 "dif_type": 0, 00:23:39.594 "dif_is_head_of_md": false, 00:23:39.594 "dif_pi_format": 0 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "bdev_wait_for_examine" 00:23:39.594 } 00:23:39.594 ] 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "subsystem": "nbd", 00:23:39.594 "config": [] 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "subsystem": "scheduler", 00:23:39.594 "config": [ 00:23:39.594 { 00:23:39.594 "method": "framework_set_scheduler", 00:23:39.594 "params": { 00:23:39.594 "name": "static" 00:23:39.594 } 00:23:39.594 } 00:23:39.594 ] 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "subsystem": "nvmf", 00:23:39.594 "config": [ 00:23:39.594 { 00:23:39.594 "method": "nvmf_set_config", 00:23:39.594 "params": { 00:23:39.594 "discovery_filter": "match_any", 00:23:39.594 "admin_cmd_passthru": { 00:23:39.594 "identify_ctrlr": false 00:23:39.594 }, 00:23:39.594 "dhchap_digests": [ 00:23:39.594 "sha256", 00:23:39.594 "sha384", 00:23:39.594 "sha512" 00:23:39.594 ], 00:23:39.594 "dhchap_dhgroups": [ 00:23:39.594 "null", 00:23:39.594 "ffdhe2048", 00:23:39.594 "ffdhe3072", 00:23:39.594 "ffdhe4096", 00:23:39.594 "ffdhe6144", 00:23:39.594 "ffdhe8192" 00:23:39.594 ] 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 
00:23:39.594 "method": "nvmf_set_max_subsystems", 00:23:39.594 "params": { 00:23:39.594 "max_subsystems": 1024 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_set_crdt", 00:23:39.594 "params": { 00:23:39.594 "crdt1": 0, 00:23:39.594 "crdt2": 0, 00:23:39.594 "crdt3": 0 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_create_transport", 00:23:39.594 "params": { 00:23:39.594 "trtype": "TCP", 00:23:39.594 "max_queue_depth": 128, 00:23:39.594 "max_io_qpairs_per_ctrlr": 127, 00:23:39.594 "in_capsule_data_size": 4096, 00:23:39.594 "max_io_size": 131072, 00:23:39.594 "io_unit_size": 131072, 00:23:39.594 "max_aq_depth": 128, 00:23:39.594 "num_shared_buffers": 511, 00:23:39.594 "buf_cache_size": 4294967295, 00:23:39.594 "dif_insert_or_strip": false, 00:23:39.594 "zcopy": false, 00:23:39.594 "c2h_success": false, 00:23:39.594 "sock_priority": 0, 00:23:39.594 "abort_timeout_sec": 1, 00:23:39.594 "ack_timeout": 0, 00:23:39.594 "data_wr_pool_size": 0 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_create_subsystem", 00:23:39.594 "params": { 00:23:39.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.594 "allow_any_host": false, 00:23:39.594 "serial_number": "SPDK00000000000001", 00:23:39.594 "model_number": "SPDK bdev Controller", 00:23:39.594 "max_namespaces": 10, 00:23:39.594 "min_cntlid": 1, 00:23:39.594 "max_cntlid": 65519, 00:23:39.594 "ana_reporting": false 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_subsystem_add_host", 00:23:39.594 "params": { 00:23:39.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.594 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.594 "psk": "key0" 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_subsystem_add_ns", 00:23:39.594 "params": { 00:23:39.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.594 "namespace": { 00:23:39.594 "nsid": 1, 00:23:39.594 "bdev_name": "malloc0", 00:23:39.594 "nguid": 
"BE5508B55C264BA79607A8DAF6241E6E", 00:23:39.594 "uuid": "be5508b5-5c26-4ba7-9607-a8daf6241e6e", 00:23:39.594 "no_auto_visible": false 00:23:39.594 } 00:23:39.594 } 00:23:39.594 }, 00:23:39.594 { 00:23:39.594 "method": "nvmf_subsystem_add_listener", 00:23:39.594 "params": { 00:23:39.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.594 "listen_address": { 00:23:39.594 "trtype": "TCP", 00:23:39.594 "adrfam": "IPv4", 00:23:39.594 "traddr": "10.0.0.2", 00:23:39.594 "trsvcid": "4420" 00:23:39.594 }, 00:23:39.594 "secure_channel": true 00:23:39.594 } 00:23:39.594 } 00:23:39.594 ] 00:23:39.594 } 00:23:39.594 ] 00:23:39.594 }' 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4050133 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4050133 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4050133 ']' 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.594 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.594 [2024-12-14 00:04:18.514745] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:39.594 [2024-12-14 00:04:18.514837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.594 [2024-12-14 00:04:18.632685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.853 [2024-12-14 00:04:18.735498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.853 [2024-12-14 00:04:18.735543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.853 [2024-12-14 00:04:18.735553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.853 [2024-12-14 00:04:18.735581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.853 [2024-12-14 00:04:18.735590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.853 [2024-12-14 00:04:18.737180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.112 [2024-12-14 00:04:19.220404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.112 [2024-12-14 00:04:19.252435] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.112 [2024-12-14 00:04:19.252691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=4050314 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 4050314 /var/tmp/bdevperf.sock 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4050314 ']' 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:40.371 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:40.371 "subsystems": [ 00:23:40.371 { 00:23:40.371 "subsystem": "keyring", 00:23:40.371 "config": [ 00:23:40.371 { 00:23:40.371 "method": "keyring_file_add_key", 00:23:40.371 "params": { 00:23:40.371 "name": "key0", 00:23:40.371 "path": "/tmp/tmp.RxVOK3vMXt" 00:23:40.371 } 00:23:40.371 } 00:23:40.371 ] 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "subsystem": "iobuf", 00:23:40.371 "config": [ 00:23:40.371 { 00:23:40.371 "method": "iobuf_set_options", 00:23:40.371 "params": { 00:23:40.371 "small_pool_count": 8192, 00:23:40.371 "large_pool_count": 1024, 00:23:40.371 "small_bufsize": 8192, 00:23:40.371 "large_bufsize": 135168, 00:23:40.371 "enable_numa": false 00:23:40.371 } 00:23:40.371 } 00:23:40.371 ] 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "subsystem": "sock", 00:23:40.371 "config": [ 00:23:40.371 { 00:23:40.371 "method": "sock_set_default_impl", 00:23:40.371 "params": { 00:23:40.371 "impl_name": "posix" 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "sock_impl_set_options", 00:23:40.371 "params": { 00:23:40.371 "impl_name": "ssl", 00:23:40.371 "recv_buf_size": 4096, 00:23:40.371 "send_buf_size": 4096, 00:23:40.371 "enable_recv_pipe": true, 00:23:40.371 "enable_quickack": false, 00:23:40.371 "enable_placement_id": 0, 00:23:40.371 "enable_zerocopy_send_server": true, 00:23:40.371 "enable_zerocopy_send_client": false, 00:23:40.371 "zerocopy_threshold": 0, 00:23:40.371 "tls_version": 0, 00:23:40.371 "enable_ktls": false 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "sock_impl_set_options", 00:23:40.371 "params": { 00:23:40.371 "impl_name": "posix", 00:23:40.371 "recv_buf_size": 2097152, 00:23:40.371 "send_buf_size": 2097152, 00:23:40.371 "enable_recv_pipe": true, 00:23:40.371 "enable_quickack": false, 00:23:40.371 "enable_placement_id": 0, 00:23:40.371 "enable_zerocopy_send_server": true, 00:23:40.371 
"enable_zerocopy_send_client": false, 00:23:40.371 "zerocopy_threshold": 0, 00:23:40.371 "tls_version": 0, 00:23:40.371 "enable_ktls": false 00:23:40.371 } 00:23:40.371 } 00:23:40.371 ] 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "subsystem": "vmd", 00:23:40.371 "config": [] 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "subsystem": "accel", 00:23:40.371 "config": [ 00:23:40.371 { 00:23:40.371 "method": "accel_set_options", 00:23:40.371 "params": { 00:23:40.371 "small_cache_size": 128, 00:23:40.371 "large_cache_size": 16, 00:23:40.371 "task_count": 2048, 00:23:40.371 "sequence_count": 2048, 00:23:40.371 "buf_count": 2048 00:23:40.371 } 00:23:40.371 } 00:23:40.371 ] 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "subsystem": "bdev", 00:23:40.371 "config": [ 00:23:40.371 { 00:23:40.371 "method": "bdev_set_options", 00:23:40.371 "params": { 00:23:40.371 "bdev_io_pool_size": 65535, 00:23:40.371 "bdev_io_cache_size": 256, 00:23:40.371 "bdev_auto_examine": true, 00:23:40.371 "iobuf_small_cache_size": 128, 00:23:40.371 "iobuf_large_cache_size": 16 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "bdev_raid_set_options", 00:23:40.371 "params": { 00:23:40.371 "process_window_size_kb": 1024, 00:23:40.371 "process_max_bandwidth_mb_sec": 0 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "bdev_iscsi_set_options", 00:23:40.371 "params": { 00:23:40.371 "timeout_sec": 30 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "bdev_nvme_set_options", 00:23:40.371 "params": { 00:23:40.371 "action_on_timeout": "none", 00:23:40.371 "timeout_us": 0, 00:23:40.371 "timeout_admin_us": 0, 00:23:40.371 "keep_alive_timeout_ms": 10000, 00:23:40.371 "arbitration_burst": 0, 00:23:40.371 "low_priority_weight": 0, 00:23:40.371 "medium_priority_weight": 0, 00:23:40.371 "high_priority_weight": 0, 00:23:40.371 "nvme_adminq_poll_period_us": 10000, 00:23:40.371 "nvme_ioq_poll_period_us": 0, 00:23:40.371 "io_queue_requests": 512, 00:23:40.371 
"delay_cmd_submit": true, 00:23:40.371 "transport_retry_count": 4, 00:23:40.371 "bdev_retry_count": 3, 00:23:40.371 "transport_ack_timeout": 0, 00:23:40.371 "ctrlr_loss_timeout_sec": 0, 00:23:40.371 "reconnect_delay_sec": 0, 00:23:40.371 "fast_io_fail_timeout_sec": 0, 00:23:40.371 "disable_auto_failback": false, 00:23:40.371 "generate_uuids": false, 00:23:40.371 "transport_tos": 0, 00:23:40.371 "nvme_error_stat": false, 00:23:40.371 "rdma_srq_size": 0, 00:23:40.371 "io_path_stat": false, 00:23:40.371 "allow_accel_sequence": false, 00:23:40.371 "rdma_max_cq_size": 0, 00:23:40.371 "rdma_cm_event_timeout_ms": 0, 00:23:40.371 "dhchap_digests": [ 00:23:40.371 "sha256", 00:23:40.371 "sha384", 00:23:40.371 "sha512" 00:23:40.371 ], 00:23:40.371 "dhchap_dhgroups": [ 00:23:40.371 "null", 00:23:40.371 "ffdhe2048", 00:23:40.371 "ffdhe3072", 00:23:40.371 "ffdhe4096", 00:23:40.371 "ffdhe6144", 00:23:40.371 "ffdhe8192" 00:23:40.371 ], 00:23:40.371 "rdma_umr_per_io": false 00:23:40.371 } 00:23:40.371 }, 00:23:40.371 { 00:23:40.371 "method": "bdev_nvme_attach_controller", 00:23:40.371 "params": { 00:23:40.371 "name": "TLSTEST", 00:23:40.371 "trtype": "TCP", 00:23:40.371 "adrfam": "IPv4", 00:23:40.371 "traddr": "10.0.0.2", 00:23:40.372 "trsvcid": "4420", 00:23:40.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.372 "prchk_reftag": false, 00:23:40.372 "prchk_guard": false, 00:23:40.372 "ctrlr_loss_timeout_sec": 0, 00:23:40.372 "reconnect_delay_sec": 0, 00:23:40.372 "fast_io_fail_timeout_sec": 0, 00:23:40.372 "psk": "key0", 00:23:40.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.372 "hdgst": false, 00:23:40.372 "ddgst": false, 00:23:40.372 "multipath": "multipath" 00:23:40.372 } 00:23:40.372 }, 00:23:40.372 { 00:23:40.372 "method": "bdev_nvme_set_hotplug", 00:23:40.372 "params": { 00:23:40.372 "period_us": 100000, 00:23:40.372 "enable": false 00:23:40.372 } 00:23:40.372 }, 00:23:40.372 { 00:23:40.372 "method": "bdev_wait_for_examine" 00:23:40.372 } 00:23:40.372 ] 
00:23:40.372 }, 00:23:40.372 { 00:23:40.372 "subsystem": "nbd", 00:23:40.372 "config": [] 00:23:40.372 } 00:23:40.372 ] 00:23:40.372 }' 00:23:40.372 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.372 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.372 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.372 [2024-12-14 00:04:19.419066] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:40.372 [2024-12-14 00:04:19.419160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050314 ] 00:23:40.630 [2024-12-14 00:04:19.527124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.630 [2024-12-14 00:04:19.637482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.196 [2024-12-14 00:04:20.049323] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:41.196 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.196 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.196 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:41.196 Running I/O for 10 seconds... 
00:23:43.515 4488.00 IOPS, 17.53 MiB/s [2024-12-13T23:04:23.594Z] 4609.50 IOPS, 18.01 MiB/s [2024-12-13T23:04:24.530Z] 4667.67 IOPS, 18.23 MiB/s [2024-12-13T23:04:25.468Z] 4662.75 IOPS, 18.21 MiB/s [2024-12-13T23:04:26.410Z] 4619.80 IOPS, 18.05 MiB/s [2024-12-13T23:04:27.345Z] 4621.50 IOPS, 18.05 MiB/s [2024-12-13T23:04:28.721Z] 4604.57 IOPS, 17.99 MiB/s [2024-12-13T23:04:29.665Z] 4609.12 IOPS, 18.00 MiB/s [2024-12-13T23:04:30.602Z] 4620.33 IOPS, 18.05 MiB/s [2024-12-13T23:04:30.602Z] 4620.70 IOPS, 18.05 MiB/s 00:23:51.461 Latency(us) 00:23:51.461 [2024-12-13T23:04:30.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.461 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:51.461 Verification LBA range: start 0x0 length 0x2000 00:23:51.461 TLSTESTn1 : 10.02 4625.38 18.07 0.00 0.00 27629.87 5804.62 33704.23 00:23:51.461 [2024-12-13T23:04:30.602Z] =================================================================================================================== 00:23:51.461 [2024-12-13T23:04:30.602Z] Total : 4625.38 18.07 0.00 0.00 27629.87 5804.62 33704.23 00:23:51.461 { 00:23:51.461 "results": [ 00:23:51.461 { 00:23:51.461 "job": "TLSTESTn1", 00:23:51.461 "core_mask": "0x4", 00:23:51.461 "workload": "verify", 00:23:51.461 "status": "finished", 00:23:51.461 "verify_range": { 00:23:51.461 "start": 0, 00:23:51.461 "length": 8192 00:23:51.461 }, 00:23:51.461 "queue_depth": 128, 00:23:51.461 "io_size": 4096, 00:23:51.461 "runtime": 10.016911, 00:23:51.461 "iops": 4625.378023224925, 00:23:51.461 "mibps": 18.067882903222362, 00:23:51.461 "io_failed": 0, 00:23:51.461 "io_timeout": 0, 00:23:51.461 "avg_latency_us": 27629.867458631903, 00:23:51.461 "min_latency_us": 5804.617142857142, 00:23:51.461 "max_latency_us": 33704.22857142857 00:23:51.461 } 00:23:51.461 ], 00:23:51.461 "core_count": 1 00:23:51.461 } 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 4050314 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4050314 ']' 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4050314 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4050314 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4050314' 00:23:51.461 killing process with pid 4050314 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4050314 00:23:51.461 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.461 00:23:51.461 Latency(us) 00:23:51.461 [2024-12-13T23:04:30.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.461 [2024-12-13T23:04:30.602Z] =================================================================================================================== 00:23:51.461 [2024-12-13T23:04:30.602Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.461 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4050314 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 4050133 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 4050133 ']' 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4050133 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4050133 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4050133' 00:23:52.398 killing process with pid 4050133 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4050133 00:23:52.398 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4050133 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4052439 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4052439 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:53.785 
00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4052439 ']' 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.785 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.785 [2024-12-14 00:04:32.696813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:53.785 [2024-12-14 00:04:32.696907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.785 [2024-12-14 00:04:32.815855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.785 [2024-12-14 00:04:32.918672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.785 [2024-12-14 00:04:32.918721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.785 [2024-12-14 00:04:32.918731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.785 [2024-12-14 00:04:32.918742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:53.785 [2024-12-14 00:04:32.918749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.785 [2024-12-14 00:04:32.920218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.353 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.611 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.611 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.611 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.611 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.611 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.612 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.RxVOK3vMXt 00:23:54.612 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RxVOK3vMXt 00:23:54.612 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.612 [2024-12-14 00:04:33.692688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.612 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.870 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.127 [2024-12-14 00:04:34.085734] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:55.127 [2024-12-14 00:04:34.086000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.127 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.386 malloc0 00:23:55.386 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.386 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:55.644 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=4052804 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 4052804 /var/tmp/bdevperf.sock 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4052804 ']' 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.903 
00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.903 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.903 [2024-12-14 00:04:34.955688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:55.903 [2024-12-14 00:04:34.955775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052804 ] 00:23:56.161 [2024-12-14 00:04:35.069031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.161 [2024-12-14 00:04:35.179346] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.727 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.727 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:56.727 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:23:56.986 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:56.986 [2024-12-14 00:04:36.106081] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:57.244 nvme0n1 00:23:57.244 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:57.244 Running I/O for 1 seconds... 00:23:58.180 4531.00 IOPS, 17.70 MiB/s 00:23:58.180 Latency(us) 00:23:58.180 [2024-12-13T23:04:37.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.180 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.180 Verification LBA range: start 0x0 length 0x2000 00:23:58.180 nvme0n1 : 1.02 4584.57 17.91 0.00 0.00 27696.42 6491.18 30208.98 00:23:58.180 [2024-12-13T23:04:37.322Z] =================================================================================================================== 00:23:58.181 [2024-12-13T23:04:37.322Z] Total : 4584.57 17.91 0.00 0.00 27696.42 6491.18 30208.98 00:23:58.181 { 00:23:58.181 "results": [ 00:23:58.181 { 00:23:58.181 "job": "nvme0n1", 00:23:58.181 "core_mask": "0x2", 00:23:58.181 "workload": "verify", 00:23:58.181 "status": "finished", 00:23:58.181 "verify_range": { 00:23:58.181 "start": 0, 00:23:58.181 "length": 8192 00:23:58.181 }, 00:23:58.181 "queue_depth": 128, 00:23:58.181 "io_size": 4096, 00:23:58.181 "runtime": 1.016235, 00:23:58.181 "iops": 4584.5695139411655, 00:23:58.181 "mibps": 17.908474663832678, 00:23:58.181 "io_failed": 0, 00:23:58.181 "io_timeout": 0, 00:23:58.181 "avg_latency_us": 27696.420288841877, 00:23:58.181 "min_latency_us": 6491.184761904762, 00:23:58.181 "max_latency_us": 30208.975238095238 00:23:58.181 } 00:23:58.181 ], 00:23:58.181 "core_count": 1 00:23:58.181 } 00:23:58.181 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 4052804 00:23:58.181 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4052804 ']' 00:23:58.181 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 4052804 00:23:58.181 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4052804 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4052804' 00:23:58.439 killing process with pid 4052804 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4052804 00:23:58.439 Received shutdown signal, test time was about 1.000000 seconds 00:23:58.439 00:23:58.439 Latency(us) 00:23:58.439 [2024-12-13T23:04:37.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.439 [2024-12-13T23:04:37.580Z] =================================================================================================================== 00:23:58.439 [2024-12-13T23:04:37.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.439 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4052804 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 4052439 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4052439 ']' 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4052439 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4052439 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4052439' 00:23:59.373 killing process with pid 4052439 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4052439 00:23:59.373 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4052439 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4053542 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4053542 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4053542 ']' 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.752 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.752 [2024-12-14 00:04:39.575421] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:00.752 [2024-12-14 00:04:39.575537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.752 [2024-12-14 00:04:39.695065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.752 [2024-12-14 00:04:39.798817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.752 [2024-12-14 00:04:39.798866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.752 [2024-12-14 00:04:39.798876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.752 [2024-12-14 00:04:39.798903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.752 [2024-12-14 00:04:39.798911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.752 [2024-12-14 00:04:39.800179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.321 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.321 [2024-12-14 00:04:40.436291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.580 malloc0 00:24:01.580 [2024-12-14 00:04:40.490955] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.580 [2024-12-14 00:04:40.491201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=4053732 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 4053732 /var/tmp/bdevperf.sock 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4053732 ']' 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:01.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.580 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.580 [2024-12-14 00:04:40.590921] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:01.580 [2024-12-14 00:04:40.591009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053732 ] 00:24:01.580 [2024-12-14 00:04:40.704290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.839 [2024-12-14 00:04:40.815103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.510 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.510 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:02.510 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RxVOK3vMXt 00:24:02.510 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:02.814 [2024-12-14 00:04:41.753299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:02.814 nvme0n1 00:24:02.814 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.814 Running I/O for 1 seconds... 
00:24:04.197 4484.00 IOPS, 17.52 MiB/s 00:24:04.197 Latency(us) 00:24:04.197 [2024-12-13T23:04:43.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.197 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:04.197 Verification LBA range: start 0x0 length 0x2000 00:24:04.197 nvme0n1 : 1.02 4524.73 17.67 0.00 0.00 28062.71 7770.70 33204.91 00:24:04.197 [2024-12-13T23:04:43.338Z] =================================================================================================================== 00:24:04.197 [2024-12-13T23:04:43.338Z] Total : 4524.73 17.67 0.00 0.00 28062.71 7770.70 33204.91 00:24:04.197 { 00:24:04.197 "results": [ 00:24:04.197 { 00:24:04.197 "job": "nvme0n1", 00:24:04.197 "core_mask": "0x2", 00:24:04.197 "workload": "verify", 00:24:04.197 "status": "finished", 00:24:04.197 "verify_range": { 00:24:04.197 "start": 0, 00:24:04.197 "length": 8192 00:24:04.197 }, 00:24:04.197 "queue_depth": 128, 00:24:04.197 "io_size": 4096, 00:24:04.197 "runtime": 1.019288, 00:24:04.197 "iops": 4524.727064382196, 00:24:04.197 "mibps": 17.674715095242952, 00:24:04.197 "io_failed": 0, 00:24:04.197 "io_timeout": 0, 00:24:04.197 "avg_latency_us": 28062.7104844505, 00:24:04.197 "min_latency_us": 7770.697142857143, 00:24:04.197 "max_latency_us": 33204.90666666667 00:24:04.197 } 00:24:04.197 ], 00:24:04.197 "core_count": 1 00:24:04.197 } 00:24:04.197 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:04.197 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.197 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.197 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.197 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:04.197 "subsystems": [ 00:24:04.197 { 00:24:04.197 "subsystem": 
"keyring", 00:24:04.197 "config": [ 00:24:04.197 { 00:24:04.197 "method": "keyring_file_add_key", 00:24:04.197 "params": { 00:24:04.197 "name": "key0", 00:24:04.197 "path": "/tmp/tmp.RxVOK3vMXt" 00:24:04.197 } 00:24:04.197 } 00:24:04.197 ] 00:24:04.197 }, 00:24:04.197 { 00:24:04.197 "subsystem": "iobuf", 00:24:04.197 "config": [ 00:24:04.197 { 00:24:04.197 "method": "iobuf_set_options", 00:24:04.197 "params": { 00:24:04.197 "small_pool_count": 8192, 00:24:04.197 "large_pool_count": 1024, 00:24:04.197 "small_bufsize": 8192, 00:24:04.197 "large_bufsize": 135168, 00:24:04.197 "enable_numa": false 00:24:04.197 } 00:24:04.197 } 00:24:04.197 ] 00:24:04.197 }, 00:24:04.197 { 00:24:04.197 "subsystem": "sock", 00:24:04.197 "config": [ 00:24:04.197 { 00:24:04.197 "method": "sock_set_default_impl", 00:24:04.197 "params": { 00:24:04.197 "impl_name": "posix" 00:24:04.197 } 00:24:04.197 }, 00:24:04.197 { 00:24:04.197 "method": "sock_impl_set_options", 00:24:04.197 "params": { 00:24:04.197 "impl_name": "ssl", 00:24:04.197 "recv_buf_size": 4096, 00:24:04.197 "send_buf_size": 4096, 00:24:04.197 "enable_recv_pipe": true, 00:24:04.197 "enable_quickack": false, 00:24:04.197 "enable_placement_id": 0, 00:24:04.197 "enable_zerocopy_send_server": true, 00:24:04.197 "enable_zerocopy_send_client": false, 00:24:04.197 "zerocopy_threshold": 0, 00:24:04.197 "tls_version": 0, 00:24:04.197 "enable_ktls": false 00:24:04.197 } 00:24:04.197 }, 00:24:04.197 { 00:24:04.197 "method": "sock_impl_set_options", 00:24:04.197 "params": { 00:24:04.197 "impl_name": "posix", 00:24:04.197 "recv_buf_size": 2097152, 00:24:04.197 "send_buf_size": 2097152, 00:24:04.197 "enable_recv_pipe": true, 00:24:04.197 "enable_quickack": false, 00:24:04.197 "enable_placement_id": 0, 00:24:04.197 "enable_zerocopy_send_server": true, 00:24:04.197 "enable_zerocopy_send_client": false, 00:24:04.198 "zerocopy_threshold": 0, 00:24:04.198 "tls_version": 0, 00:24:04.198 "enable_ktls": false 00:24:04.198 } 00:24:04.198 } 00:24:04.198 
] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "vmd", 00:24:04.198 "config": [] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "accel", 00:24:04.198 "config": [ 00:24:04.198 { 00:24:04.198 "method": "accel_set_options", 00:24:04.198 "params": { 00:24:04.198 "small_cache_size": 128, 00:24:04.198 "large_cache_size": 16, 00:24:04.198 "task_count": 2048, 00:24:04.198 "sequence_count": 2048, 00:24:04.198 "buf_count": 2048 00:24:04.198 } 00:24:04.198 } 00:24:04.198 ] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "bdev", 00:24:04.198 "config": [ 00:24:04.198 { 00:24:04.198 "method": "bdev_set_options", 00:24:04.198 "params": { 00:24:04.198 "bdev_io_pool_size": 65535, 00:24:04.198 "bdev_io_cache_size": 256, 00:24:04.198 "bdev_auto_examine": true, 00:24:04.198 "iobuf_small_cache_size": 128, 00:24:04.198 "iobuf_large_cache_size": 16 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_raid_set_options", 00:24:04.198 "params": { 00:24:04.198 "process_window_size_kb": 1024, 00:24:04.198 "process_max_bandwidth_mb_sec": 0 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_iscsi_set_options", 00:24:04.198 "params": { 00:24:04.198 "timeout_sec": 30 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_nvme_set_options", 00:24:04.198 "params": { 00:24:04.198 "action_on_timeout": "none", 00:24:04.198 "timeout_us": 0, 00:24:04.198 "timeout_admin_us": 0, 00:24:04.198 "keep_alive_timeout_ms": 10000, 00:24:04.198 "arbitration_burst": 0, 00:24:04.198 "low_priority_weight": 0, 00:24:04.198 "medium_priority_weight": 0, 00:24:04.198 "high_priority_weight": 0, 00:24:04.198 "nvme_adminq_poll_period_us": 10000, 00:24:04.198 "nvme_ioq_poll_period_us": 0, 00:24:04.198 "io_queue_requests": 0, 00:24:04.198 "delay_cmd_submit": true, 00:24:04.198 "transport_retry_count": 4, 00:24:04.198 "bdev_retry_count": 3, 00:24:04.198 "transport_ack_timeout": 0, 00:24:04.198 "ctrlr_loss_timeout_sec": 0, 
00:24:04.198 "reconnect_delay_sec": 0, 00:24:04.198 "fast_io_fail_timeout_sec": 0, 00:24:04.198 "disable_auto_failback": false, 00:24:04.198 "generate_uuids": false, 00:24:04.198 "transport_tos": 0, 00:24:04.198 "nvme_error_stat": false, 00:24:04.198 "rdma_srq_size": 0, 00:24:04.198 "io_path_stat": false, 00:24:04.198 "allow_accel_sequence": false, 00:24:04.198 "rdma_max_cq_size": 0, 00:24:04.198 "rdma_cm_event_timeout_ms": 0, 00:24:04.198 "dhchap_digests": [ 00:24:04.198 "sha256", 00:24:04.198 "sha384", 00:24:04.198 "sha512" 00:24:04.198 ], 00:24:04.198 "dhchap_dhgroups": [ 00:24:04.198 "null", 00:24:04.198 "ffdhe2048", 00:24:04.198 "ffdhe3072", 00:24:04.198 "ffdhe4096", 00:24:04.198 "ffdhe6144", 00:24:04.198 "ffdhe8192" 00:24:04.198 ], 00:24:04.198 "rdma_umr_per_io": false 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_nvme_set_hotplug", 00:24:04.198 "params": { 00:24:04.198 "period_us": 100000, 00:24:04.198 "enable": false 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_malloc_create", 00:24:04.198 "params": { 00:24:04.198 "name": "malloc0", 00:24:04.198 "num_blocks": 8192, 00:24:04.198 "block_size": 4096, 00:24:04.198 "physical_block_size": 4096, 00:24:04.198 "uuid": "aa6ba84a-1720-48c2-8601-00364634ae87", 00:24:04.198 "optimal_io_boundary": 0, 00:24:04.198 "md_size": 0, 00:24:04.198 "dif_type": 0, 00:24:04.198 "dif_is_head_of_md": false, 00:24:04.198 "dif_pi_format": 0 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "bdev_wait_for_examine" 00:24:04.198 } 00:24:04.198 ] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "nbd", 00:24:04.198 "config": [] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "scheduler", 00:24:04.198 "config": [ 00:24:04.198 { 00:24:04.198 "method": "framework_set_scheduler", 00:24:04.198 "params": { 00:24:04.198 "name": "static" 00:24:04.198 } 00:24:04.198 } 00:24:04.198 ] 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "subsystem": "nvmf", 
00:24:04.198 "config": [ 00:24:04.198 { 00:24:04.198 "method": "nvmf_set_config", 00:24:04.198 "params": { 00:24:04.198 "discovery_filter": "match_any", 00:24:04.198 "admin_cmd_passthru": { 00:24:04.198 "identify_ctrlr": false 00:24:04.198 }, 00:24:04.198 "dhchap_digests": [ 00:24:04.198 "sha256", 00:24:04.198 "sha384", 00:24:04.198 "sha512" 00:24:04.198 ], 00:24:04.198 "dhchap_dhgroups": [ 00:24:04.198 "null", 00:24:04.198 "ffdhe2048", 00:24:04.198 "ffdhe3072", 00:24:04.198 "ffdhe4096", 00:24:04.198 "ffdhe6144", 00:24:04.198 "ffdhe8192" 00:24:04.198 ] 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_set_max_subsystems", 00:24:04.198 "params": { 00:24:04.198 "max_subsystems": 1024 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_set_crdt", 00:24:04.198 "params": { 00:24:04.198 "crdt1": 0, 00:24:04.198 "crdt2": 0, 00:24:04.198 "crdt3": 0 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_create_transport", 00:24:04.198 "params": { 00:24:04.198 "trtype": "TCP", 00:24:04.198 "max_queue_depth": 128, 00:24:04.198 "max_io_qpairs_per_ctrlr": 127, 00:24:04.198 "in_capsule_data_size": 4096, 00:24:04.198 "max_io_size": 131072, 00:24:04.198 "io_unit_size": 131072, 00:24:04.198 "max_aq_depth": 128, 00:24:04.198 "num_shared_buffers": 511, 00:24:04.198 "buf_cache_size": 4294967295, 00:24:04.198 "dif_insert_or_strip": false, 00:24:04.198 "zcopy": false, 00:24:04.198 "c2h_success": false, 00:24:04.198 "sock_priority": 0, 00:24:04.198 "abort_timeout_sec": 1, 00:24:04.198 "ack_timeout": 0, 00:24:04.198 "data_wr_pool_size": 0 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_create_subsystem", 00:24:04.198 "params": { 00:24:04.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.198 "allow_any_host": false, 00:24:04.198 "serial_number": "00000000000000000000", 00:24:04.198 "model_number": "SPDK bdev Controller", 00:24:04.198 "max_namespaces": 32, 00:24:04.198 "min_cntlid": 1, 
00:24:04.198 "max_cntlid": 65519, 00:24:04.198 "ana_reporting": false 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_subsystem_add_host", 00:24:04.198 "params": { 00:24:04.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.198 "host": "nqn.2016-06.io.spdk:host1", 00:24:04.198 "psk": "key0" 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_subsystem_add_ns", 00:24:04.198 "params": { 00:24:04.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.198 "namespace": { 00:24:04.198 "nsid": 1, 00:24:04.198 "bdev_name": "malloc0", 00:24:04.198 "nguid": "AA6BA84A172048C2860100364634AE87", 00:24:04.198 "uuid": "aa6ba84a-1720-48c2-8601-00364634ae87", 00:24:04.198 "no_auto_visible": false 00:24:04.198 } 00:24:04.198 } 00:24:04.198 }, 00:24:04.198 { 00:24:04.198 "method": "nvmf_subsystem_add_listener", 00:24:04.198 "params": { 00:24:04.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.198 "listen_address": { 00:24:04.198 "trtype": "TCP", 00:24:04.198 "adrfam": "IPv4", 00:24:04.198 "traddr": "10.0.0.2", 00:24:04.198 "trsvcid": "4420" 00:24:04.198 }, 00:24:04.198 "secure_channel": false, 00:24:04.198 "sock_impl": "ssl" 00:24:04.198 } 00:24:04.198 } 00:24:04.198 ] 00:24:04.198 } 00:24:04.198 ] 00:24:04.198 }' 00:24:04.198 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:04.457 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:04.457 "subsystems": [ 00:24:04.457 { 00:24:04.457 "subsystem": "keyring", 00:24:04.457 "config": [ 00:24:04.457 { 00:24:04.457 "method": "keyring_file_add_key", 00:24:04.457 "params": { 00:24:04.457 "name": "key0", 00:24:04.457 "path": "/tmp/tmp.RxVOK3vMXt" 00:24:04.457 } 00:24:04.457 } 00:24:04.457 ] 00:24:04.457 }, 00:24:04.457 { 00:24:04.457 "subsystem": "iobuf", 00:24:04.457 "config": [ 00:24:04.457 { 00:24:04.457 "method": 
"iobuf_set_options", 00:24:04.457 "params": { 00:24:04.457 "small_pool_count": 8192, 00:24:04.458 "large_pool_count": 1024, 00:24:04.458 "small_bufsize": 8192, 00:24:04.458 "large_bufsize": 135168, 00:24:04.458 "enable_numa": false 00:24:04.458 } 00:24:04.458 } 00:24:04.458 ] 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "subsystem": "sock", 00:24:04.458 "config": [ 00:24:04.458 { 00:24:04.458 "method": "sock_set_default_impl", 00:24:04.458 "params": { 00:24:04.458 "impl_name": "posix" 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "sock_impl_set_options", 00:24:04.458 "params": { 00:24:04.458 "impl_name": "ssl", 00:24:04.458 "recv_buf_size": 4096, 00:24:04.458 "send_buf_size": 4096, 00:24:04.458 "enable_recv_pipe": true, 00:24:04.458 "enable_quickack": false, 00:24:04.458 "enable_placement_id": 0, 00:24:04.458 "enable_zerocopy_send_server": true, 00:24:04.458 "enable_zerocopy_send_client": false, 00:24:04.458 "zerocopy_threshold": 0, 00:24:04.458 "tls_version": 0, 00:24:04.458 "enable_ktls": false 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "sock_impl_set_options", 00:24:04.458 "params": { 00:24:04.458 "impl_name": "posix", 00:24:04.458 "recv_buf_size": 2097152, 00:24:04.458 "send_buf_size": 2097152, 00:24:04.458 "enable_recv_pipe": true, 00:24:04.458 "enable_quickack": false, 00:24:04.458 "enable_placement_id": 0, 00:24:04.458 "enable_zerocopy_send_server": true, 00:24:04.458 "enable_zerocopy_send_client": false, 00:24:04.458 "zerocopy_threshold": 0, 00:24:04.458 "tls_version": 0, 00:24:04.458 "enable_ktls": false 00:24:04.458 } 00:24:04.458 } 00:24:04.458 ] 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "subsystem": "vmd", 00:24:04.458 "config": [] 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "subsystem": "accel", 00:24:04.458 "config": [ 00:24:04.458 { 00:24:04.458 "method": "accel_set_options", 00:24:04.458 "params": { 00:24:04.458 "small_cache_size": 128, 00:24:04.458 "large_cache_size": 16, 00:24:04.458 "task_count": 
2048, 00:24:04.458 "sequence_count": 2048, 00:24:04.458 "buf_count": 2048 00:24:04.458 } 00:24:04.458 } 00:24:04.458 ] 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "subsystem": "bdev", 00:24:04.458 "config": [ 00:24:04.458 { 00:24:04.458 "method": "bdev_set_options", 00:24:04.458 "params": { 00:24:04.458 "bdev_io_pool_size": 65535, 00:24:04.458 "bdev_io_cache_size": 256, 00:24:04.458 "bdev_auto_examine": true, 00:24:04.458 "iobuf_small_cache_size": 128, 00:24:04.458 "iobuf_large_cache_size": 16 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_raid_set_options", 00:24:04.458 "params": { 00:24:04.458 "process_window_size_kb": 1024, 00:24:04.458 "process_max_bandwidth_mb_sec": 0 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_iscsi_set_options", 00:24:04.458 "params": { 00:24:04.458 "timeout_sec": 30 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_nvme_set_options", 00:24:04.458 "params": { 00:24:04.458 "action_on_timeout": "none", 00:24:04.458 "timeout_us": 0, 00:24:04.458 "timeout_admin_us": 0, 00:24:04.458 "keep_alive_timeout_ms": 10000, 00:24:04.458 "arbitration_burst": 0, 00:24:04.458 "low_priority_weight": 0, 00:24:04.458 "medium_priority_weight": 0, 00:24:04.458 "high_priority_weight": 0, 00:24:04.458 "nvme_adminq_poll_period_us": 10000, 00:24:04.458 "nvme_ioq_poll_period_us": 0, 00:24:04.458 "io_queue_requests": 512, 00:24:04.458 "delay_cmd_submit": true, 00:24:04.458 "transport_retry_count": 4, 00:24:04.458 "bdev_retry_count": 3, 00:24:04.458 "transport_ack_timeout": 0, 00:24:04.458 "ctrlr_loss_timeout_sec": 0, 00:24:04.458 "reconnect_delay_sec": 0, 00:24:04.458 "fast_io_fail_timeout_sec": 0, 00:24:04.458 "disable_auto_failback": false, 00:24:04.458 "generate_uuids": false, 00:24:04.458 "transport_tos": 0, 00:24:04.458 "nvme_error_stat": false, 00:24:04.458 "rdma_srq_size": 0, 00:24:04.458 "io_path_stat": false, 00:24:04.458 "allow_accel_sequence": false, 00:24:04.458 
"rdma_max_cq_size": 0, 00:24:04.458 "rdma_cm_event_timeout_ms": 0, 00:24:04.458 "dhchap_digests": [ 00:24:04.458 "sha256", 00:24:04.458 "sha384", 00:24:04.458 "sha512" 00:24:04.458 ], 00:24:04.458 "dhchap_dhgroups": [ 00:24:04.458 "null", 00:24:04.458 "ffdhe2048", 00:24:04.458 "ffdhe3072", 00:24:04.458 "ffdhe4096", 00:24:04.458 "ffdhe6144", 00:24:04.458 "ffdhe8192" 00:24:04.458 ], 00:24:04.458 "rdma_umr_per_io": false 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_nvme_attach_controller", 00:24:04.458 "params": { 00:24:04.458 "name": "nvme0", 00:24:04.458 "trtype": "TCP", 00:24:04.458 "adrfam": "IPv4", 00:24:04.458 "traddr": "10.0.0.2", 00:24:04.458 "trsvcid": "4420", 00:24:04.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.458 "prchk_reftag": false, 00:24:04.458 "prchk_guard": false, 00:24:04.458 "ctrlr_loss_timeout_sec": 0, 00:24:04.458 "reconnect_delay_sec": 0, 00:24:04.458 "fast_io_fail_timeout_sec": 0, 00:24:04.458 "psk": "key0", 00:24:04.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.458 "hdgst": false, 00:24:04.458 "ddgst": false, 00:24:04.458 "multipath": "multipath" 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_nvme_set_hotplug", 00:24:04.458 "params": { 00:24:04.458 "period_us": 100000, 00:24:04.458 "enable": false 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_enable_histogram", 00:24:04.458 "params": { 00:24:04.458 "name": "nvme0n1", 00:24:04.458 "enable": true 00:24:04.458 } 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "method": "bdev_wait_for_examine" 00:24:04.458 } 00:24:04.458 ] 00:24:04.458 }, 00:24:04.458 { 00:24:04.458 "subsystem": "nbd", 00:24:04.458 "config": [] 00:24:04.458 } 00:24:04.458 ] 00:24:04.458 }' 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 4053732 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4053732 ']' 00:24:04.458 00:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4053732 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053732 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053732' 00:24:04.458 killing process with pid 4053732 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4053732 00:24:04.458 Received shutdown signal, test time was about 1.000000 seconds 00:24:04.458 00:24:04.458 Latency(us) 00:24:04.458 [2024-12-13T23:04:43.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.458 [2024-12-13T23:04:43.599Z] =================================================================================================================== 00:24:04.458 [2024-12-13T23:04:43.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.458 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4053732 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 4053542 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4053542 ']' 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4053542 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:05.394 00:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053542 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053542' 00:24:05.394 killing process with pid 4053542 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4053542 00:24:05.394 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4053542 00:24:06.771 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:06.771 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.771 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.771 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:06.771 "subsystems": [ 00:24:06.771 { 00:24:06.771 "subsystem": "keyring", 00:24:06.771 "config": [ 00:24:06.771 { 00:24:06.771 "method": "keyring_file_add_key", 00:24:06.771 "params": { 00:24:06.771 "name": "key0", 00:24:06.771 "path": "/tmp/tmp.RxVOK3vMXt" 00:24:06.771 } 00:24:06.771 } 00:24:06.771 ] 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "subsystem": "iobuf", 00:24:06.771 "config": [ 00:24:06.771 { 00:24:06.771 "method": "iobuf_set_options", 00:24:06.771 "params": { 00:24:06.771 "small_pool_count": 8192, 00:24:06.771 "large_pool_count": 1024, 00:24:06.771 "small_bufsize": 8192, 00:24:06.771 "large_bufsize": 135168, 00:24:06.771 "enable_numa": false 00:24:06.771 } 00:24:06.771 } 
00:24:06.771 ] 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "subsystem": "sock", 00:24:06.771 "config": [ 00:24:06.771 { 00:24:06.771 "method": "sock_set_default_impl", 00:24:06.771 "params": { 00:24:06.771 "impl_name": "posix" 00:24:06.771 } 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "method": "sock_impl_set_options", 00:24:06.771 "params": { 00:24:06.771 "impl_name": "ssl", 00:24:06.771 "recv_buf_size": 4096, 00:24:06.771 "send_buf_size": 4096, 00:24:06.771 "enable_recv_pipe": true, 00:24:06.771 "enable_quickack": false, 00:24:06.771 "enable_placement_id": 0, 00:24:06.771 "enable_zerocopy_send_server": true, 00:24:06.771 "enable_zerocopy_send_client": false, 00:24:06.771 "zerocopy_threshold": 0, 00:24:06.771 "tls_version": 0, 00:24:06.771 "enable_ktls": false 00:24:06.771 } 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "method": "sock_impl_set_options", 00:24:06.771 "params": { 00:24:06.771 "impl_name": "posix", 00:24:06.771 "recv_buf_size": 2097152, 00:24:06.771 "send_buf_size": 2097152, 00:24:06.771 "enable_recv_pipe": true, 00:24:06.771 "enable_quickack": false, 00:24:06.771 "enable_placement_id": 0, 00:24:06.771 "enable_zerocopy_send_server": true, 00:24:06.771 "enable_zerocopy_send_client": false, 00:24:06.771 "zerocopy_threshold": 0, 00:24:06.771 "tls_version": 0, 00:24:06.771 "enable_ktls": false 00:24:06.771 } 00:24:06.771 } 00:24:06.771 ] 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "subsystem": "vmd", 00:24:06.771 "config": [] 00:24:06.771 }, 00:24:06.771 { 00:24:06.771 "subsystem": "accel", 00:24:06.771 "config": [ 00:24:06.771 { 00:24:06.771 "method": "accel_set_options", 00:24:06.771 "params": { 00:24:06.771 "small_cache_size": 128, 00:24:06.771 "large_cache_size": 16, 00:24:06.771 "task_count": 2048, 00:24:06.771 "sequence_count": 2048, 00:24:06.771 "buf_count": 2048 00:24:06.771 } 00:24:06.771 } 00:24:06.771 ] 00:24:06.771 }, 00:24:06.772 { 00:24:06.772 "subsystem": "bdev", 00:24:06.772 "config": [ 00:24:06.772 { 00:24:06.772 "method": 
"bdev_set_options", 00:24:06.772 "params": { 00:24:06.772 "bdev_io_pool_size": 65535, 00:24:06.772 "bdev_io_cache_size": 256, 00:24:06.772 "bdev_auto_examine": true, 00:24:06.772 "iobuf_small_cache_size": 128, 00:24:06.772 "iobuf_large_cache_size": 16 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_raid_set_options", 00:24:06.772 "params": { 00:24:06.772 "process_window_size_kb": 1024, 00:24:06.772 "process_max_bandwidth_mb_sec": 0 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_iscsi_set_options", 00:24:06.772 "params": { 00:24:06.772 "timeout_sec": 30 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_nvme_set_options", 00:24:06.772 "params": { 00:24:06.772 "action_on_timeout": "none", 00:24:06.772 "timeout_us": 0, 00:24:06.772 "timeout_admin_us": 0, 00:24:06.772 "keep_alive_timeout_ms": 10000, 00:24:06.772 "arbitration_burst": 0, 00:24:06.772 "low_priority_weight": 0, 00:24:06.772 "medium_priority_weight": 0, 00:24:06.772 "high_priority_weight": 0, 00:24:06.772 "nvme_adminq_poll_period_us": 10000, 00:24:06.772 "nvme_ioq_poll_period_us": 0, 00:24:06.772 "io_queue_requests": 0, 00:24:06.772 "delay_cmd_submit": true, 00:24:06.772 "transport_retry_count": 4, 00:24:06.772 "bdev_retry_count": 3, 00:24:06.772 "transport_ack_timeout": 0, 00:24:06.772 "ctrlr_loss_timeout_sec": 0, 00:24:06.772 "reconnect_delay_sec": 0, 00:24:06.772 "fast_io_fail_timeout_sec": 0, 00:24:06.772 "disable_auto_failback": false, 00:24:06.772 "generate_uuids": false, 00:24:06.772 "transport_tos": 0, 00:24:06.772 "nvme_error_stat": false, 00:24:06.772 "rdma_srq_size": 0, 00:24:06.772 "io_path_stat": false, 00:24:06.772 "allow_accel_sequence": false, 00:24:06.772 "rdma_max_cq_size": 0, 00:24:06.772 "rdma_cm_event_timeout_ms": 0, 00:24:06.772 "dhchap_digests": [ 00:24:06.772 "sha256", 00:24:06.772 "sha384", 00:24:06.772 "sha512" 00:24:06.772 ], 00:24:06.772 "dhchap_dhgroups": [ 00:24:06.772 "null", 00:24:06.772 
"ffdhe2048", 00:24:06.772 "ffdhe3072", 00:24:06.772 "ffdhe4096", 00:24:06.772 "ffdhe6144", 00:24:06.772 "ffdhe8192" 00:24:06.772 ], 00:24:06.772 "rdma_umr_per_io": false 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_nvme_set_hotplug", 00:24:06.772 "params": { 00:24:06.772 "period_us": 100000, 00:24:06.772 "enable": false 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_malloc_create", 00:24:06.772 "params": { 00:24:06.772 "name": "malloc0", 00:24:06.772 "num_blocks": 8192, 00:24:06.772 "block_size": 4096, 00:24:06.772 "physical_block_size": 4096, 00:24:06.772 "uuid": "aa6ba84a-1720-48c2-8601-00364634ae87", 00:24:06.772 "optimal_io_boundary": 0, 00:24:06.772 "md_size": 0, 00:24:06.772 "dif_type": 0, 00:24:06.772 "dif_is_head_of_md": false, 00:24:06.772 "dif_pi_format": 0 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "bdev_wait_for_examine" 00:24:06.772 } 00:24:06.772 ] 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "subsystem": "nbd", 00:24:06.772 "config": [] 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "subsystem": "scheduler", 00:24:06.772 "config": [ 00:24:06.772 { 00:24:06.772 "method": "framework_set_scheduler", 00:24:06.772 "params": { 00:24:06.772 "name": "static" 00:24:06.772 } 00:24:06.772 } 00:24:06.772 ] 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "subsystem": "nvmf", 00:24:06.772 "config": [ 00:24:06.772 { 00:24:06.772 "method": "nvmf_set_config", 00:24:06.772 "params": { 00:24:06.772 "discovery_filter": "match_any", 00:24:06.772 "admin_cmd_passthru": { 00:24:06.772 "identify_ctrlr": false 00:24:06.772 }, 00:24:06.772 "dhchap_digests": [ 00:24:06.772 "sha256", 00:24:06.772 "sha384", 00:24:06.772 "sha512" 00:24:06.772 ], 00:24:06.772 "dhchap_dhgroups": [ 00:24:06.772 "null", 00:24:06.772 "ffdhe2048", 00:24:06.772 "ffdhe3072", 00:24:06.772 "ffdhe4096", 00:24:06.772 "ffdhe6144", 00:24:06.772 "ffdhe8192" 00:24:06.772 ] 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 
"method": "nvmf_set_max_subsystems", 00:24:06.772 "params": { 00:24:06.772 "max_subsystems": 1024 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "nvmf_set_crdt", 00:24:06.772 "params": { 00:24:06.772 "crdt1": 0, 00:24:06.772 "crdt2": 0, 00:24:06.772 "crdt3": 0 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "nvmf_create_transport", 00:24:06.772 "params": { 00:24:06.772 "trtype": "TCP", 00:24:06.772 "max_queue_depth": 128, 00:24:06.772 "max_io_qpairs_per_ctrlr": 127, 00:24:06.772 "in_capsule_data_size": 4096, 00:24:06.772 "max_io_size": 131072, 00:24:06.772 "io_unit_size": 131072, 00:24:06.772 "max_aq_depth": 128, 00:24:06.772 "num_shared_buffers": 511, 00:24:06.772 "buf_cache_size": 4294967295, 00:24:06.772 "dif_insert_or_strip": false, 00:24:06.772 "zcopy": false, 00:24:06.772 "c2h_success": false, 00:24:06.772 "sock_priority": 0, 00:24:06.772 "abort_timeout_sec": 1, 00:24:06.772 "ack_timeout": 0, 00:24:06.772 "data_wr_pool_size": 0 00:24:06.772 } 00:24:06.772 }, 00:24:06.772 { 00:24:06.772 "method": "nvmf_create_subsystem", 00:24:06.772 "params": { 00:24:06.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.773 "allow_any_host": false, 00:24:06.773 "serial_number": "00000000000000000000", 00:24:06.773 "model_number": "SPDK bdev Controller", 00:24:06.773 "max_namespaces": 32, 00:24:06.773 "min_cntlid": 1, 00:24:06.773 "max_cntlid": 65519, 00:24:06.773 "ana_reporting": false 00:24:06.773 } 00:24:06.773 }, 00:24:06.773 { 00:24:06.773 "method": "nvmf_subsystem_add_host", 00:24:06.773 "params": { 00:24:06.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.773 "host": "nqn.2016-06.io.spdk:host1", 00:24:06.773 "psk": "key0" 00:24:06.773 } 00:24:06.773 }, 00:24:06.773 { 00:24:06.773 "method": "nvmf_subsystem_add_ns", 00:24:06.773 "params": { 00:24:06.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.773 "namespace": { 00:24:06.773 "nsid": 1, 00:24:06.773 "bdev_name": "malloc0", 00:24:06.773 "nguid": 
"AA6BA84A172048C2860100364634AE87", 00:24:06.773 "uuid": "aa6ba84a-1720-48c2-8601-00364634ae87", 00:24:06.773 "no_auto_visible": false 00:24:06.773 } 00:24:06.773 } 00:24:06.773 }, 00:24:06.773 { 00:24:06.773 "method": "nvmf_subsystem_add_listener", 00:24:06.773 "params": { 00:24:06.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.773 "listen_address": { 00:24:06.773 "trtype": "TCP", 00:24:06.773 "adrfam": "IPv4", 00:24:06.773 "traddr": "10.0.0.2", 00:24:06.773 "trsvcid": "4420" 00:24:06.773 }, 00:24:06.773 "secure_channel": false, 00:24:06.773 "sock_impl": "ssl" 00:24:06.773 } 00:24:06.773 } 00:24:06.773 ] 00:24:06.773 } 00:24:06.773 ] 00:24:06.773 }' 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=4054644 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 4054644 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4054644 ']' 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.773 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.773 [2024-12-14 00:04:45.615447] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:06.773 [2024-12-14 00:04:45.615539] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.773 [2024-12-14 00:04:45.732039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.773 [2024-12-14 00:04:45.838181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.773 [2024-12-14 00:04:45.838227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.773 [2024-12-14 00:04:45.838239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.773 [2024-12-14 00:04:45.838249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.773 [2024-12-14 00:04:45.838257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.773 [2024-12-14 00:04:45.839769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.341 [2024-12-14 00:04:46.320494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.341 [2024-12-14 00:04:46.352550] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.341 [2024-12-14 00:04:46.352802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=4054679 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 4054679 /var/tmp/bdevperf.sock 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 4054679 ']' 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.341 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:07.341 "subsystems": [ 00:24:07.341 { 00:24:07.341 "subsystem": "keyring", 00:24:07.341 "config": [ 00:24:07.341 { 00:24:07.341 "method": "keyring_file_add_key", 00:24:07.341 "params": { 00:24:07.341 "name": "key0", 00:24:07.341 "path": "/tmp/tmp.RxVOK3vMXt" 00:24:07.341 } 00:24:07.341 } 00:24:07.341 ] 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "subsystem": "iobuf", 00:24:07.341 "config": [ 00:24:07.341 { 00:24:07.341 "method": "iobuf_set_options", 00:24:07.341 "params": { 00:24:07.341 "small_pool_count": 8192, 00:24:07.341 "large_pool_count": 1024, 00:24:07.341 "small_bufsize": 8192, 00:24:07.341 "large_bufsize": 135168, 00:24:07.341 "enable_numa": false 00:24:07.341 } 00:24:07.341 } 00:24:07.341 ] 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "subsystem": "sock", 00:24:07.341 "config": [ 00:24:07.341 { 00:24:07.341 "method": "sock_set_default_impl", 00:24:07.341 "params": { 00:24:07.341 "impl_name": "posix" 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "sock_impl_set_options", 00:24:07.341 "params": { 00:24:07.341 "impl_name": "ssl", 00:24:07.341 "recv_buf_size": 4096, 00:24:07.341 "send_buf_size": 4096, 00:24:07.341 "enable_recv_pipe": true, 00:24:07.341 "enable_quickack": false, 00:24:07.341 "enable_placement_id": 0, 00:24:07.341 "enable_zerocopy_send_server": true, 00:24:07.341 "enable_zerocopy_send_client": false, 00:24:07.341 "zerocopy_threshold": 0, 00:24:07.341 "tls_version": 0, 00:24:07.341 "enable_ktls": false 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "sock_impl_set_options", 00:24:07.341 "params": { 
00:24:07.341 "impl_name": "posix", 00:24:07.341 "recv_buf_size": 2097152, 00:24:07.341 "send_buf_size": 2097152, 00:24:07.341 "enable_recv_pipe": true, 00:24:07.341 "enable_quickack": false, 00:24:07.341 "enable_placement_id": 0, 00:24:07.341 "enable_zerocopy_send_server": true, 00:24:07.341 "enable_zerocopy_send_client": false, 00:24:07.341 "zerocopy_threshold": 0, 00:24:07.341 "tls_version": 0, 00:24:07.341 "enable_ktls": false 00:24:07.341 } 00:24:07.341 } 00:24:07.341 ] 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "subsystem": "vmd", 00:24:07.341 "config": [] 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "subsystem": "accel", 00:24:07.341 "config": [ 00:24:07.341 { 00:24:07.341 "method": "accel_set_options", 00:24:07.341 "params": { 00:24:07.341 "small_cache_size": 128, 00:24:07.341 "large_cache_size": 16, 00:24:07.341 "task_count": 2048, 00:24:07.341 "sequence_count": 2048, 00:24:07.341 "buf_count": 2048 00:24:07.341 } 00:24:07.341 } 00:24:07.341 ] 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "subsystem": "bdev", 00:24:07.341 "config": [ 00:24:07.341 { 00:24:07.341 "method": "bdev_set_options", 00:24:07.341 "params": { 00:24:07.341 "bdev_io_pool_size": 65535, 00:24:07.341 "bdev_io_cache_size": 256, 00:24:07.341 "bdev_auto_examine": true, 00:24:07.341 "iobuf_small_cache_size": 128, 00:24:07.341 "iobuf_large_cache_size": 16 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "bdev_raid_set_options", 00:24:07.341 "params": { 00:24:07.341 "process_window_size_kb": 1024, 00:24:07.341 "process_max_bandwidth_mb_sec": 0 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "bdev_iscsi_set_options", 00:24:07.341 "params": { 00:24:07.341 "timeout_sec": 30 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "bdev_nvme_set_options", 00:24:07.341 "params": { 00:24:07.341 "action_on_timeout": "none", 00:24:07.341 "timeout_us": 0, 00:24:07.341 "timeout_admin_us": 0, 00:24:07.341 "keep_alive_timeout_ms": 10000, 00:24:07.341 
"arbitration_burst": 0, 00:24:07.341 "low_priority_weight": 0, 00:24:07.341 "medium_priority_weight": 0, 00:24:07.341 "high_priority_weight": 0, 00:24:07.341 "nvme_adminq_poll_period_us": 10000, 00:24:07.341 "nvme_ioq_poll_period_us": 0, 00:24:07.341 "io_queue_requests": 512, 00:24:07.341 "delay_cmd_submit": true, 00:24:07.341 "transport_retry_count": 4, 00:24:07.341 "bdev_retry_count": 3, 00:24:07.341 "transport_ack_timeout": 0, 00:24:07.341 "ctrlr_loss_timeout_sec": 0, 00:24:07.341 "reconnect_delay_sec": 0, 00:24:07.341 "fast_io_fail_timeout_sec": 0, 00:24:07.341 "disable_auto_failback": false, 00:24:07.341 "generate_uuids": false, 00:24:07.341 "transport_tos": 0, 00:24:07.341 "nvme_error_stat": false, 00:24:07.341 "rdma_srq_size": 0, 00:24:07.341 "io_path_stat": false, 00:24:07.341 "allow_accel_sequence": false, 00:24:07.341 "rdma_max_cq_size": 0, 00:24:07.341 "rdma_cm_event_timeout_ms": 0, 00:24:07.341 "dhchap_digests": [ 00:24:07.341 "sha256", 00:24:07.341 "sha384", 00:24:07.341 "sha512" 00:24:07.341 ], 00:24:07.341 "dhchap_dhgroups": [ 00:24:07.341 "null", 00:24:07.341 "ffdhe2048", 00:24:07.341 "ffdhe3072", 00:24:07.341 "ffdhe4096", 00:24:07.341 "ffdhe6144", 00:24:07.341 "ffdhe8192" 00:24:07.341 ], 00:24:07.341 "rdma_umr_per_io": false 00:24:07.341 } 00:24:07.341 }, 00:24:07.341 { 00:24:07.341 "method": "bdev_nvme_attach_controller", 00:24:07.341 "params": { 00:24:07.341 "name": "nvme0", 00:24:07.341 "trtype": "TCP", 00:24:07.341 "adrfam": "IPv4", 00:24:07.341 "traddr": "10.0.0.2", 00:24:07.341 "trsvcid": "4420", 00:24:07.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.341 "prchk_reftag": false, 00:24:07.342 "prchk_guard": false, 00:24:07.342 "ctrlr_loss_timeout_sec": 0, 00:24:07.342 "reconnect_delay_sec": 0, 00:24:07.342 "fast_io_fail_timeout_sec": 0, 00:24:07.342 "psk": "key0", 00:24:07.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.342 "hdgst": false, 00:24:07.342 "ddgst": false, 00:24:07.342 "multipath": "multipath" 00:24:07.342 } 00:24:07.342 
}, 00:24:07.342 { 00:24:07.342 "method": "bdev_nvme_set_hotplug", 00:24:07.342 "params": { 00:24:07.342 "period_us": 100000, 00:24:07.342 "enable": false 00:24:07.342 } 00:24:07.342 }, 00:24:07.342 { 00:24:07.342 "method": "bdev_enable_histogram", 00:24:07.342 "params": { 00:24:07.342 "name": "nvme0n1", 00:24:07.342 "enable": true 00:24:07.342 } 00:24:07.342 }, 00:24:07.342 { 00:24:07.342 "method": "bdev_wait_for_examine" 00:24:07.342 } 00:24:07.342 ] 00:24:07.342 }, 00:24:07.342 { 00:24:07.342 "subsystem": "nbd", 00:24:07.342 "config": [] 00:24:07.342 } 00:24:07.342 ] 00:24:07.342 }' 00:24:07.342 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.342 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.600 [2024-12-14 00:04:46.529262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:07.600 [2024-12-14 00:04:46.529347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054679 ] 00:24:07.600 [2024-12-14 00:04:46.644246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.859 [2024-12-14 00:04:46.756444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.118 [2024-12-14 00:04:47.165591] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.376 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.376 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:08.376 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:24:08.376 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:08.635 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.635 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:08.635 Running I/O for 1 seconds... 00:24:09.571 4421.00 IOPS, 17.27 MiB/s 00:24:09.571 Latency(us) 00:24:09.571 [2024-12-13T23:04:48.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.571 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:09.571 Verification LBA range: start 0x0 length 0x2000 00:24:09.571 nvme0n1 : 1.01 4485.55 17.52 0.00 0.00 28329.92 5398.92 29584.82 00:24:09.571 [2024-12-13T23:04:48.712Z] =================================================================================================================== 00:24:09.571 [2024-12-13T23:04:48.712Z] Total : 4485.55 17.52 0.00 0.00 28329.92 5398.92 29584.82 00:24:09.571 { 00:24:09.571 "results": [ 00:24:09.571 { 00:24:09.571 "job": "nvme0n1", 00:24:09.571 "core_mask": "0x2", 00:24:09.571 "workload": "verify", 00:24:09.571 "status": "finished", 00:24:09.571 "verify_range": { 00:24:09.571 "start": 0, 00:24:09.571 "length": 8192 00:24:09.571 }, 00:24:09.571 "queue_depth": 128, 00:24:09.571 "io_size": 4096, 00:24:09.571 "runtime": 1.014368, 00:24:09.571 "iops": 4485.551594687529, 00:24:09.571 "mibps": 17.52168591674816, 00:24:09.571 "io_failed": 0, 00:24:09.571 "io_timeout": 0, 00:24:09.571 "avg_latency_us": 28329.916483516485, 00:24:09.571 "min_latency_us": 5398.918095238095, 00:24:09.571 "max_latency_us": 29584.822857142855 00:24:09.571 } 00:24:09.571 ], 00:24:09.571 "core_count": 1 00:24:09.571 } 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:09.571 
00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:09.571 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:09.571 nvmf_trace.0 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 4054679 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4054679 ']' 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4054679 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:09.830 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 4054679 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4054679' 00:24:09.831 killing process with pid 4054679 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4054679 00:24:09.831 Received shutdown signal, test time was about 1.000000 seconds 00:24:09.831 00:24:09.831 Latency(us) 00:24:09.831 [2024-12-13T23:04:48.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.831 [2024-12-13T23:04:48.972Z] =================================================================================================================== 00:24:09.831 [2024-12-13T23:04:48.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.831 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4054679 00:24:10.767 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:10.767 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.767 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:10.767 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.768 rmmod nvme_tcp 00:24:10.768 rmmod nvme_fabrics 00:24:10.768 rmmod nvme_keyring 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 4054644 ']' 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 4054644 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 4054644 ']' 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 4054644 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4054644 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4054644' 00:24:10.768 killing process with pid 4054644 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 4054644 00:24:10.768 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 4054644 00:24:12.144 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:12.144 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:12.144 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:12.144 00:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.145 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3CrHVtsC2f /tmp/tmp.gOiRXcGyxL /tmp/tmp.RxVOK3vMXt 00:24:14.063 00:24:14.063 real 1m46.374s 00:24:14.063 user 2m44.981s 00:24:14.063 sys 0m31.315s 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.063 ************************************ 00:24:14.063 END TEST nvmf_tls 00:24:14.063 ************************************ 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:14.063 
00:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:14.063 ************************************ 00:24:14.063 START TEST nvmf_fips 00:24:14.063 ************************************ 00:24:14.063 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:14.323 * Looking for test storage... 00:24:14.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.323 --rc genhtml_branch_coverage=1 00:24:14.323 --rc genhtml_function_coverage=1 00:24:14.323 --rc genhtml_legend=1 00:24:14.323 --rc geninfo_all_blocks=1 00:24:14.323 --rc geninfo_unexecuted_blocks=1 00:24:14.323 00:24:14.323 ' 00:24:14.323 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.323 --rc genhtml_branch_coverage=1 00:24:14.323 --rc genhtml_function_coverage=1 00:24:14.323 --rc genhtml_legend=1 00:24:14.323 --rc geninfo_all_blocks=1 00:24:14.324 --rc geninfo_unexecuted_blocks=1 00:24:14.324 00:24:14.324 ' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.324 --rc genhtml_branch_coverage=1 00:24:14.324 --rc genhtml_function_coverage=1 00:24:14.324 --rc genhtml_legend=1 00:24:14.324 --rc geninfo_all_blocks=1 00:24:14.324 --rc geninfo_unexecuted_blocks=1 00:24:14.324 00:24:14.324 ' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:14.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.324 --rc genhtml_branch_coverage=1 00:24:14.324 --rc genhtml_function_coverage=1 00:24:14.324 --rc genhtml_legend=1 00:24:14.324 --rc geninfo_all_blocks=1 00:24:14.324 --rc geninfo_unexecuted_blocks=1 00:24:14.324 00:24:14.324 ' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:14.324 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:14.325 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:14.584 Error setting digest 00:24:14.584 4032CAAF127F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:14.584 4032CAAF127F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.584 00:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.584 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:19.858 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:19.858 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:19.858 Found net devices under 0000:af:00.0: cvl_0_0 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:19.858 Found net devices under 0000:af:00.1: cvl_0_1 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.858 00:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.858 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:24:20.117 00:24:20.117 --- 10.0.0.2 ping statistics --- 00:24:20.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.117 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:24:20.117 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:20.376 00:24:20.376 --- 10.0.0.1 ping statistics --- 00:24:20.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.376 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.376 00:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=4059026 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 4059026 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4059026 ']' 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.376 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:20.376 [2024-12-14 00:04:59.419205] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:20.376 [2024-12-14 00:04:59.419299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.635 [2024-12-14 00:04:59.535830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.635 [2024-12-14 00:04:59.639909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.636 [2024-12-14 00:04:59.639954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.636 [2024-12-14 00:04:59.639965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.636 [2024-12-14 00:04:59.639976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.636 [2024-12-14 00:04:59.639984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.636 [2024-12-14 00:04:59.641453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.h6r 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.h6r 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.h6r 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.h6r 00:24:21.203 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.461 [2024-12-14 00:05:00.397992] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.461 [2024-12-14 00:05:00.413961] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.461 [2024-12-14 00:05:00.414221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.461 malloc0 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=4059150 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 4059150 /var/tmp/bdevperf.sock 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 4059150 ']' 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.461 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:21.461 [2024-12-14 00:05:00.597959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:24:21.461 [2024-12-14 00:05:00.598057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059150 ] 00:24:21.719 [2024-12-14 00:05:00.709619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.719 [2024-12-14 00:05:00.822461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.286 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.286 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:22.286 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.h6r 00:24:22.544 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.803 [2024-12-14 00:05:01.717649] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.803 TLSTESTn1 00:24:22.803 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:22.803 Running I/O for 10 seconds... 
00:24:25.132 4529.00 IOPS, 17.69 MiB/s [2024-12-13T23:05:05.212Z] 4678.00 IOPS, 18.27 MiB/s [2024-12-13T23:05:06.159Z] 4721.67 IOPS, 18.44 MiB/s [2024-12-13T23:05:07.095Z] 4713.25 IOPS, 18.41 MiB/s [2024-12-13T23:05:08.030Z] 4707.60 IOPS, 18.39 MiB/s [2024-12-13T23:05:08.965Z] 4696.17 IOPS, 18.34 MiB/s [2024-12-13T23:05:10.339Z] 4703.43 IOPS, 18.37 MiB/s [2024-12-13T23:05:11.274Z] 4691.25 IOPS, 18.33 MiB/s [2024-12-13T23:05:12.220Z] 4686.89 IOPS, 18.31 MiB/s [2024-12-13T23:05:12.220Z] 4687.70 IOPS, 18.31 MiB/s 00:24:33.079 Latency(us) 00:24:33.079 [2024-12-13T23:05:12.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.079 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.079 Verification LBA range: start 0x0 length 0x2000 00:24:33.079 TLSTESTn1 : 10.02 4690.67 18.32 0.00 0.00 27240.57 7146.54 36200.84 00:24:33.079 [2024-12-13T23:05:12.220Z] =================================================================================================================== 00:24:33.079 [2024-12-13T23:05:12.220Z] Total : 4690.67 18.32 0.00 0.00 27240.57 7146.54 36200.84 00:24:33.079 { 00:24:33.079 "results": [ 00:24:33.079 { 00:24:33.079 "job": "TLSTESTn1", 00:24:33.079 "core_mask": "0x4", 00:24:33.079 "workload": "verify", 00:24:33.079 "status": "finished", 00:24:33.079 "verify_range": { 00:24:33.079 "start": 0, 00:24:33.079 "length": 8192 00:24:33.079 }, 00:24:33.079 "queue_depth": 128, 00:24:33.079 "io_size": 4096, 00:24:33.079 "runtime": 10.020947, 00:24:33.079 "iops": 4690.674444241647, 00:24:33.079 "mibps": 18.322947047818932, 00:24:33.079 "io_failed": 0, 00:24:33.079 "io_timeout": 0, 00:24:33.079 "avg_latency_us": 27240.571313001154, 00:24:33.079 "min_latency_us": 7146.544761904762, 00:24:33.079 "max_latency_us": 36200.8380952381 00:24:33.079 } 00:24:33.079 ], 00:24:33.079 "core_count": 1 00:24:33.079 } 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:33.079 
00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:33.079 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:33.080 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:33.080 nvmf_trace.0 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4059150 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4059150 ']' 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4059150 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059150 00:24:33.080 00:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059150' 00:24:33.080 killing process with pid 4059150 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4059150 00:24:33.080 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.080 00:24:33.080 Latency(us) 00:24:33.080 [2024-12-13T23:05:12.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.080 [2024-12-13T23:05:12.221Z] =================================================================================================================== 00:24:33.080 [2024-12-13T23:05:12.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.080 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4059150 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.023 rmmod nvme_tcp 00:24:34.023 rmmod nvme_fabrics 00:24:34.023 rmmod nvme_keyring 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 4059026 ']' 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 4059026 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 4059026 ']' 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 4059026 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4059026 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4059026' 00:24:34.023 killing process with pid 4059026 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 4059026 00:24:34.023 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 4059026 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.399 00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.h6r 00:24:37.933 00:24:37.933 real 0m23.283s 00:24:37.933 user 0m26.179s 00:24:37.933 sys 0m9.036s 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.933 ************************************ 00:24:37.933 END TEST nvmf_fips 00:24:37.933 ************************************ 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:37.933 ************************************ 00:24:37.933 START TEST nvmf_control_msg_list 00:24:37.933 ************************************ 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:37.933 * Looking for test storage... 00:24:37.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.933 00:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.933 --rc genhtml_branch_coverage=1 00:24:37.933 --rc genhtml_function_coverage=1 00:24:37.933 --rc genhtml_legend=1 00:24:37.933 --rc geninfo_all_blocks=1 00:24:37.933 --rc geninfo_unexecuted_blocks=1 00:24:37.933 00:24:37.933 ' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.933 --rc genhtml_branch_coverage=1 00:24:37.933 --rc genhtml_function_coverage=1 00:24:37.933 --rc genhtml_legend=1 00:24:37.933 --rc geninfo_all_blocks=1 00:24:37.933 --rc geninfo_unexecuted_blocks=1 00:24:37.933 00:24:37.933 ' 00:24:37.933 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:37.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.933 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:24:37.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.934 --rc genhtml_branch_coverage=1 00:24:37.934 --rc genhtml_function_coverage=1 00:24:37.934 --rc genhtml_legend=1 00:24:37.934 --rc geninfo_all_blocks=1 00:24:37.934 --rc geninfo_unexecuted_blocks=1 00:24:37.934 00:24:37.934 ' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 00:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.934 00:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.934 00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.199 00:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.199 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:43.200 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:43.200 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.200 00:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:43.200 Found net devices under 0000:af:00.0: cvl_0_0 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.200 00:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:43.200 Found net devices under 0000:af:00.1: cvl_0_1 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.200 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.200 00:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:24:43.200 00:24:43.200 --- 10.0.0.2 ping statistics --- 00:24:43.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.200 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:43.200 00:24:43.200 --- 10.0.0.1 ping statistics --- 00:24:43.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.200 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=4064773 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 4064773 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 4064773 ']' 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.200 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.201 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.201 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:43.201 [2024-12-14 00:05:22.330804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:43.201 [2024-12-14 00:05:22.330914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.459 [2024-12-14 00:05:22.449934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.459 [2024-12-14 00:05:22.556600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.459 [2024-12-14 00:05:22.556645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.459 [2024-12-14 00:05:22.556655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.459 [2024-12-14 00:05:22.556665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.459 [2024-12-14 00:05:22.556673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.459 [2024-12-14 00:05:22.558017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.025 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.025 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:44.025 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:44.025 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:44.025 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 [2024-12-14 00:05:23.175484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 Malloc0 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:44.284 [2024-12-14 00:05:23.248332] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=4065012 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=4065013 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=4065014 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 4065012 00:24:44.284 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.284 [2024-12-14 00:05:23.364497] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:44.284 [2024-12-14 00:05:23.364802] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:44.284 [2024-12-14 00:05:23.365046] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:45.659 Initializing NVMe Controllers 00:24:45.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:45.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:45.659 Initialization complete. Launching workers. 00:24:45.659 ======================================================== 00:24:45.659 Latency(us) 00:24:45.659 Device Information : IOPS MiB/s Average min max 00:24:45.659 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41245.69 40787.94 45270.81 00:24:45.659 ======================================================== 00:24:45.659 Total : 25.00 0.10 41245.69 40787.94 45270.81 00:24:45.659 00:24:45.659 Initializing NVMe Controllers 00:24:45.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:45.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:45.659 Initialization complete. Launching workers. 
00:24:45.659 ======================================================== 00:24:45.659 Latency(us) 00:24:45.659 Device Information : IOPS MiB/s Average min max 00:24:45.659 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41291.15 40851.43 46341.24 00:24:45.659 ======================================================== 00:24:45.659 Total : 25.00 0.10 41291.15 40851.43 46341.24 00:24:45.660 00:24:45.660 Initializing NVMe Controllers 00:24:45.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:45.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:45.660 Initialization complete. Launching workers. 00:24:45.660 ======================================================== 00:24:45.660 Latency(us) 00:24:45.660 Device Information : IOPS MiB/s Average min max 00:24:45.660 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6151.00 24.03 162.16 152.54 722.66 00:24:45.660 ======================================================== 00:24:45.660 Total : 6151.00 24.03 162.16 152.54 722.66 00:24:45.660 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 4065013 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 4065014 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.660 00:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.660 rmmod nvme_tcp 00:24:45.660 rmmod nvme_fabrics 00:24:45.660 rmmod nvme_keyring 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 4064773 ']' 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 4064773 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 4064773 ']' 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 4064773 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4064773 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 4064773' 00:24:45.660 killing process with pid 4064773 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 4064773 00:24:45.660 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 4064773 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.033 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.034 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.034 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.034 00:05:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.958 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.959 00:24:48.959 real 0m11.342s 00:24:48.959 user 0m8.232s 
00:24:48.959 sys 0m5.242s 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:48.959 ************************************ 00:24:48.959 END TEST nvmf_control_msg_list 00:24:48.959 ************************************ 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:48.959 ************************************ 00:24:48.959 START TEST nvmf_wait_for_buf 00:24:48.959 ************************************ 00:24:48.959 00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:48.959 * Looking for test storage... 
00:24:48.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:48.959 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:48.959 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:48.959 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.220 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:49.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.221 --rc genhtml_branch_coverage=1 00:24:49.221 --rc genhtml_function_coverage=1 00:24:49.221 --rc genhtml_legend=1 00:24:49.221 --rc geninfo_all_blocks=1 00:24:49.221 --rc geninfo_unexecuted_blocks=1 00:24:49.221 00:24:49.221 ' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:49.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.221 --rc genhtml_branch_coverage=1 00:24:49.221 --rc genhtml_function_coverage=1 00:24:49.221 --rc genhtml_legend=1 00:24:49.221 --rc geninfo_all_blocks=1 00:24:49.221 --rc geninfo_unexecuted_blocks=1 00:24:49.221 00:24:49.221 ' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:49.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.221 --rc genhtml_branch_coverage=1 00:24:49.221 --rc genhtml_function_coverage=1 00:24:49.221 --rc genhtml_legend=1 00:24:49.221 --rc geninfo_all_blocks=1 00:24:49.221 --rc geninfo_unexecuted_blocks=1 00:24:49.221 00:24:49.221 ' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:49.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.221 --rc genhtml_branch_coverage=1 00:24:49.221 --rc genhtml_function_coverage=1 00:24:49.221 --rc genhtml_legend=1 00:24:49.221 --rc geninfo_all_blocks=1 00:24:49.221 --rc geninfo_unexecuted_blocks=1 00:24:49.221 00:24:49.221 ' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.221 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.222 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.222 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.222 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:49.222 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.222 00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.506 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:54.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:54.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:54.507 Found net devices under 0000:af:00.0: cvl_0_0 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.507 00:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:54.507 Found net devices under 0000:af:00.1: cvl_0_1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.507 00:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.507 00:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:24:54.507 00:24:54.507 --- 10.0.0.2 ping statistics --- 00:24:54.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.507 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:54.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:24:54.507 00:24:54.507 --- 10.0.0.1 ping statistics --- 00:24:54.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.507 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=4068710 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 4068710 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 4068710 ']' 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.507 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:54.773 [2024-12-14 00:05:33.657896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:54.773 [2024-12-14 00:05:33.657985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.773 [2024-12-14 00:05:33.775195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.774 [2024-12-14 00:05:33.879749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.774 [2024-12-14 00:05:33.879795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:54.774 [2024-12-14 00:05:33.879805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.774 [2024-12-14 00:05:33.879817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.774 [2024-12-14 00:05:33.879825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.774 [2024-12-14 00:05:33.880999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.414 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.415 
00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.415 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.735 Malloc0 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:55.735 [2024-12-14 00:05:34.809933] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:55.735 [2024-12-14 00:05:34.834122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:55.735 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.993 [2024-12-14 00:05:34.964559] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:57.368 Initializing NVMe Controllers 00:24:57.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:57.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:57.368 Initialization complete. Launching workers. 00:24:57.368 ======================================================== 00:24:57.368 Latency(us) 00:24:57.368 Device Information : IOPS MiB/s Average min max 00:24:57.368 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.56 15.57 33230.18 26928.60 70672.17 00:24:57.368 ======================================================== 00:24:57.368 Total : 124.56 15.57 33230.18 26928.60 70672.17 00:24:57.368 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.627 00:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.627 rmmod nvme_tcp 00:24:57.627 rmmod nvme_fabrics 00:24:57.627 rmmod nvme_keyring 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 4068710 ']' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 4068710 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 4068710 ']' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 4068710 
00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4068710 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4068710' 00:24:57.627 killing process with pid 4068710 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 4068710 00:24:57.627 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 4068710 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:59.004 00:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.004 00:05:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.907 00:25:00.907 real 0m11.850s 00:25:00.907 user 0m5.841s 00:25:00.907 sys 0m4.660s 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:00.907 ************************************ 00:25:00.907 END TEST nvmf_wait_for_buf 00:25:00.907 ************************************ 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:00.907 ************************************ 00:25:00.907 START TEST nvmf_fuzz 00:25:00.907 ************************************ 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:00.907 * Looking for test storage... 00:25:00.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:25:00.907 00:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:00.907 00:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:00.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.907 --rc genhtml_branch_coverage=1 00:25:00.907 --rc genhtml_function_coverage=1 
00:25:00.907 --rc genhtml_legend=1 00:25:00.907 --rc geninfo_all_blocks=1 00:25:00.907 --rc geninfo_unexecuted_blocks=1 00:25:00.907 00:25:00.907 ' 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:00.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.907 --rc genhtml_branch_coverage=1 00:25:00.907 --rc genhtml_function_coverage=1 00:25:00.907 --rc genhtml_legend=1 00:25:00.907 --rc geninfo_all_blocks=1 00:25:00.907 --rc geninfo_unexecuted_blocks=1 00:25:00.907 00:25:00.907 ' 00:25:00.907 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:00.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.907 --rc genhtml_branch_coverage=1 00:25:00.907 --rc genhtml_function_coverage=1 00:25:00.907 --rc genhtml_legend=1 00:25:00.907 --rc geninfo_all_blocks=1 00:25:00.907 --rc geninfo_unexecuted_blocks=1 00:25:00.907 00:25:00.907 ' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:01.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.167 --rc genhtml_branch_coverage=1 00:25:01.167 --rc genhtml_function_coverage=1 00:25:01.167 --rc genhtml_legend=1 00:25:01.167 --rc geninfo_all_blocks=1 00:25:01.167 --rc geninfo_unexecuted_blocks=1 00:25:01.167 00:25:01.167 ' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.167 
00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.167 00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.731 00:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:25:07.731 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:07.731 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.731 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:07.731 Found net devices under 0000:af:00.0: cvl_0_0 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:07.732 Found net devices under 0000:af:00.1: cvl_0_1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.732 00:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:25:07.732 00:25:07.732 --- 10.0.0.2 ping statistics --- 00:25:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.732 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:25:07.732 00:25:07.732 --- 10.0.0.1 ping statistics --- 00:25:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.732 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4072867 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4072867 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 4072867 ']' 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.732 00:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 Malloc0 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:07.732 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:39.811 Fuzzing completed. 
Shutting down the fuzz application 00:25:39.811 00:25:39.811 Dumping successful admin opcodes: 00:25:39.811 9, 10, 00:25:39.811 Dumping successful io opcodes: 00:25:39.811 0, 9, 00:25:39.811 NS: 0x2000008efec0 I/O qp, Total commands completed: 753400, total successful commands: 4389, random_seed: 1365667904 00:25:39.811 NS: 0x2000008efec0 admin qp, Total commands completed: 71824, total successful commands: 16, random_seed: 620911872 00:25:39.811 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:40.379 Fuzzing completed. Shutting down the fuzz application 00:25:40.379 00:25:40.379 Dumping successful admin opcodes: 00:25:40.379 00:25:40.379 Dumping successful io opcodes: 00:25:40.379 00:25:40.379 NS: 0x2000008efec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1166417090 00:25:40.379 NS: 0x2000008efec0 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1166514244 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:40.379 00:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.379 rmmod nvme_tcp 00:25:40.379 rmmod nvme_fabrics 00:25:40.379 rmmod nvme_keyring 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 4072867 ']' 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 4072867 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 4072867 ']' 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 4072867 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4072867 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4072867' 00:25:40.379 killing process with pid 4072867 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 4072867 00:25:40.379 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 4072867 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.758 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:44.296 00:25:44.296 real 0m43.034s 00:25:44.296 user 0m58.909s 00:25:44.296 sys 0m14.717s 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:44.296 ************************************ 00:25:44.296 END TEST nvmf_fuzz 00:25:44.296 ************************************ 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:44.296 ************************************ 00:25:44.296 START TEST nvmf_multiconnection 00:25:44.296 ************************************ 00:25:44.296 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:44.296 * Looking for test storage... 
00:25:44.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:44.296 00:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.296 --rc genhtml_branch_coverage=1 00:25:44.296 --rc genhtml_function_coverage=1 00:25:44.296 --rc genhtml_legend=1 00:25:44.296 --rc geninfo_all_blocks=1 00:25:44.296 --rc geninfo_unexecuted_blocks=1 00:25:44.296 00:25:44.296 ' 00:25:44.296 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:44.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.296 --rc genhtml_branch_coverage=1 00:25:44.296 --rc genhtml_function_coverage=1 00:25:44.296 --rc genhtml_legend=1 00:25:44.296 --rc geninfo_all_blocks=1 00:25:44.297 --rc geninfo_unexecuted_blocks=1 00:25:44.297 00:25:44.297 ' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:44.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.297 --rc genhtml_branch_coverage=1 00:25:44.297 --rc genhtml_function_coverage=1 00:25:44.297 --rc genhtml_legend=1 00:25:44.297 --rc geninfo_all_blocks=1 00:25:44.297 --rc geninfo_unexecuted_blocks=1 00:25:44.297 00:25:44.297 ' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:44.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:44.297 --rc genhtml_branch_coverage=1 00:25:44.297 --rc genhtml_function_coverage=1 00:25:44.297 --rc genhtml_legend=1 00:25:44.297 --rc geninfo_all_blocks=1 00:25:44.297 --rc geninfo_unexecuted_blocks=1 00:25:44.297 00:25:44.297 ' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.297 00:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:44.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:44.297 00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.573 00:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.573 00:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:49.573 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.573 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:49.574 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:49.574 Found net devices under 0000:af:00.0: cvl_0_0 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:49.574 Found net devices under 0000:af:00.1: cvl_0_1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.574 00:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:25:49.574 00:25:49.574 --- 10.0.0.2 ping statistics --- 00:25:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.574 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:49.574 00:25:49.574 --- 10.0.0.1 ping statistics --- 00:25:49.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.574 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
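The namespace plumbing in the trace above (ip netns add, moving cvl_0_0 into the namespace, addressing both ends of the 10.0.0.0/24 link, opening TCP/4420 via iptables, then a cross-namespace ping) can be sketched as a dry-run script. Interface names and addresses are taken from this log; the run() wrapper and DRY_RUN switch are illustrative helpers, not part of nvmf/common.sh:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator namespace setup seen in the log.
# With DRY_RUN=1 (the default here) commands are printed, not executed;
# actually running them needs root and the cvl_0_* interfaces.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_link() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator IP
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                            # initiator -> target check
}

setup_tcp_link
```

The single ping in each direction is what lets the harness return 0 from nvmf_tcp_init and proceed to starting the target.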
00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=4082203 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 4082203 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 4082203 ']' 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
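waitforlisten above blocks until the freshly launched nvmf_tgt (pid 4082203 here) is both alive and serving the RPC socket at /var/tmp/spdk.sock. A minimal version of that polling pattern, with the retry count and sleep interval as illustrative values rather than the exact ones autotest_common.sh uses, could look like:

```shell
# Poll until $pid is serving the RPC unix socket, or give up.
# Returns 0 once the socket exists, 1 if the process dies or we time out.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
        [ -S "$sock" ] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Used as `waitforlisten "$nvmfpid"` right after launching the target inside the namespace, which is why the "Waiting for process to start up..." message precedes the EAL initialization lines below.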
00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.574 00:06:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.575 [2024-12-14 00:06:28.668245] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:49.575 [2024-12-14 00:06:28.668333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.834 [2024-12-14 00:06:28.786540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.834 [2024-12-14 00:06:28.895986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.834 [2024-12-14 00:06:28.896032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.834 [2024-12-14 00:06:28.896043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.834 [2024-12-14 00:06:28.896069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.834 [2024-12-14 00:06:28.896078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.834 [2024-12-14 00:06:28.898405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.834 [2024-12-14 00:06:28.898507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.834 [2024-12-14 00:06:28.898529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.834 [2024-12-14 00:06:28.898539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.403 [2024-12-14 00:06:29.510949] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:50.403 00:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.403 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.663 Malloc1 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.663 [2024-12-14 00:06:29.636151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.663 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.664 Malloc2 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.664 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.923 Malloc3 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:50.923 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 Malloc4 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 
00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 Malloc5 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:50.924 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 Malloc6 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 Malloc7 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.184 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 Malloc8 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 Malloc9 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 Malloc10 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.444 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.703 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.703 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.703 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.704 Malloc11 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:51.704 
00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.704 00:06:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
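The block of RPCs traced above repeats one four-step sequence per subsystem (multiconnection.sh @21-@25, Malloc1 through Malloc11): create a 64 MB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal dry-run sketch of that loop; `rpc_cmd` here just echoes the would-be `rpc.py` call (the `rpc.py` spelling is an assumption, while the RPC names and arguments are copied from the trace):

```shell
# Dry-run of the multiconnection setup loop: prints the rpc.py call for each
# of the NVMF_SUBSYS subsystems instead of executing it against a live target.
rpc_cmd() { echo "rpc.py $*"; }   # stand-in; a real run would invoke scripts/rpc.py

NVMF_SUBSYS=11                    # matches `seq 1 11` in the trace
NVMF_FIRST_TARGET_IP=10.0.0.2     # listener address used throughout the trace

for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s 4420
done
```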
00:25:53.082 00:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:53.082 00:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:53.082 00:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.082 00:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:53.082 00:06:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.988 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:55.925 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:55.925 00:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:55.925 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:55.925 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:55.925 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.459 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:59.396 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:59.396 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:59.396 00:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:59.396 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:59.396 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:01.302 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:02.681 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:02.681 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:02.681 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.681 
00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:02.681 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.632 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:06.009 00:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:06.009 00:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:06.009 00:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.009 00:06:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.009 00:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:07.911 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:07.912 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:09.289 00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:09.289 00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:09.289 00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.289 00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:09.289 00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:11.194 00:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.194 00:06:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:12.573 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:12.573 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:12.573 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:12.573 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:12.573 00:06:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:14.478 00:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.478 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:15.857 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:15.857 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:15.857 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.857 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:15.857 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:17.928 00:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.928 00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:19.307 00:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:19.307 00:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:19.307 00:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.307 00:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:19.307 00:06:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.213 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.213 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.213 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:21.472 00:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.472 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.472 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.472 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.472 00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:22.850 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:22.850 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:22.850 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.850 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:22.850 00:07:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:24.756 00:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.756 00:07:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:26.134 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:26.134 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:26.135 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.135 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:26.135 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.668 
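Every `nvme connect` in this stretch is followed by `waitforserial`: sleep two seconds, count the devices whose serial matches in `lsblk -l -o NAME,SERIAL`, and retry up to 16 times until the count reaches `nvme_device_counter` (common/autotest_common.sh @1202-@1212). A self-contained sketch of that wait loop; `lsblk_out` is a stub with illustrative output so the logic runs without real NVMe devices:

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL`; on a real host, call lsblk directly.
lsblk_out() {
    printf 'NAME    SERIAL\nnvme0n1 SPDK1\nnvme1n1 SPDK2\n'
}

# Poll until `want` block devices carrying the given serial are visible,
# mirroring the `(( i++ <= 15 ))` / `sleep 2` retry bounds seen in the trace.
waitforserial() {
    serial=$1
    want=${2:-1}              # nvme_device_counter in the trace
    i=0
    while [ "$i" -le 15 ]; do
        got=$(lsblk_out | grep -c "$serial")
        [ "$got" -eq "$want" ] && return 0
        i=$((i + 1))
        sleep 2
    done
    return 1
}
```

With the stub, `waitforserial SPDK1` succeeds on the first poll; in the real run this loop is what produces the roughly two-second gaps between consecutive connects.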
00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:28.668 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:26:28.668 [global]
00:26:28.668 thread=1
00:26:28.668 invalidate=1
00:26:28.668 rw=read
00:26:28.669 time_based=1
00:26:28.669 runtime=10
00:26:28.669 ioengine=libaio
00:26:28.669 direct=1
00:26:28.669 bs=262144
00:26:28.669 iodepth=64
00:26:28.669 norandommap=1
00:26:28.669 numjobs=1
00:26:28.669
00:26:28.669 [job0]
00:26:28.669 filename=/dev/nvme0n1
00:26:28.669 [job1]
00:26:28.669 filename=/dev/nvme10n1
00:26:28.669 [job2]
00:26:28.669 filename=/dev/nvme1n1
00:26:28.669 [job3]
00:26:28.669 filename=/dev/nvme2n1
00:26:28.669 [job4]
00:26:28.669 filename=/dev/nvme3n1
00:26:28.669 [job5]
00:26:28.669 filename=/dev/nvme4n1
00:26:28.669 [job6]
00:26:28.669 filename=/dev/nvme5n1
00:26:28.669 [job7]
00:26:28.669 filename=/dev/nvme6n1
00:26:28.669 [job8]
00:26:28.669 filename=/dev/nvme7n1
00:26:28.669 [job9]
00:26:28.669 filename=/dev/nvme8n1
00:26:28.669 [job10]
00:26:28.669 filename=/dev/nvme9n1
00:26:28.669 Could not set queue depth (nvme0n1)
00:26:28.669 Could not set queue depth (nvme10n1)
00:26:28.669 Could not set queue depth (nvme1n1)
00:26:28.669 Could not set queue depth (nvme2n1)
00:26:28.669 Could not set queue depth (nvme3n1)
00:26:28.669 Could not set queue depth (nvme4n1)
00:26:28.669 Could not set queue depth (nvme5n1)
00:26:28.669 Could not set queue depth (nvme6n1)
00:26:28.669 Could not set queue depth (nvme7n1)
00:26:28.669 Could not set queue depth (nvme8n1)
00:26:28.669 Could not set queue depth (nvme9n1)
00:26:28.669 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:28.669 fio-3.35
00:26:28.669 Starting 11 threads
00:26:40.886
00:26:40.886
00:26:40.886 job0: (groupid=0, jobs=1): err= 0: pid=4088942: Sat Dec 14 00:07:18 2024
00:26:40.886 read: IOPS=420, BW=105MiB/s (110MB/s)(1060MiB/10092msec)
00:26:40.886 slat (usec): min=16, max=160584, avg=1265.80, stdev=6434.95
00:26:40.886 clat (usec): min=1373, max=730321, avg=150941.11, stdev=115141.50
00:26:40.886 lat (usec): min=1409, max=730353, avg=152206.92, stdev=115948.28
00:26:40.886 clat percentiles (msec):
00:26:40.886 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 20], 20.00th=[ 49],
00:26:40.886 | 30.00th=[ 72], 40.00th=[ 117], 50.00th=[ 140], 60.00th=[ 155],
00:26:40.886 | 70.00th=[ 192], 80.00th=[ 241], 90.00th=[ 300], 95.00th=[ 334],
00:26:40.886 | 99.00th=[ 531], 99.50th=[ 693], 99.90th=[ 718], 99.95th=[ 735],
00:26:40.886 | 99.99th=[ 735]
00:26:40.886 bw ( KiB/s): min=50176, max=303616, per=13.47%, avg=106905.60, stdev=63977.60, samples=20
00:26:40.886 iops : min= 196, max= 1186, avg=417.60, stdev=249.91, samples=20
00:26:40.886 lat (msec) : 2=0.24%, 4=0.66%, 10=5.24%, 20=4.46%, 50=10.31%
00:26:40.886 lat (msec) : 100=14.74%, 250=46.69%, 500=16.35%, 750=1.32%
00:26:40.886 cpu : usr=0.18%, sys=1.65%, ctx=1286, majf=0, minf=4097
00:26:40.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:26:40.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:40.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:40.886 issued rwts: total=4239,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:40.886 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:40.886 job1: (groupid=0, jobs=1): err= 0: pid=4088943: Sat Dec 14 00:07:18 2024
00:26:40.886 read: IOPS=338, BW=84.6MiB/s (88.7MB/s)(859MiB/10158msec)
00:26:40.886 slat (usec): min=14, max=323940, avg=2082.22, stdev=9458.62
00:26:40.886 clat (msec): min=16, max=695, avg=186.83, stdev=140.90
00:26:40.886 lat (msec): min=16, max=695, avg=188.92, stdev=141.77
00:26:40.886 clat percentiles (msec):
00:26:40.886 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 62],
00:26:40.886 | 30.00th=[ 72], 40.00th=[ 93], 50.00th=[ 144], 60.00th=[ 218],
00:26:40.886 | 70.00th=[ 268], 80.00th=[ 300], 90.00th=[ 384], 95.00th=[ 456],
00:26:40.886 | 99.00th=[ 600], 99.50th=[ 642], 99.90th=[ 693], 99.95th=[ 693],
00:26:40.886 | 99.99th=[ 693]
00:26:40.886 bw ( KiB/s): min=26112, max=332288, per=10.89%, avg=86365.65, stdev=74495.63, samples=20
00:26:40.886 iops : min= 102, max= 1298, avg=337.35, stdev=291.00, samples=20
00:26:40.886 lat (msec) : 20=0.09%, 50=13.33%, 100=29.10%, 250=23.51%, 500=31.16%
00:26:40.886 lat (msec) : 750=2.82%
00:26:40.886 cpu : usr=0.12%, sys=1.40%, ctx=517, majf=0, minf=4097
00:26:40.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2%
00:26:40.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:40.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:40.886 issued rwts: total=3437,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:40.886 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:40.886 job2: (groupid=0, jobs=1): err= 0: pid=4088946: Sat Dec 14 00:07:18 2024
00:26:40.886 read: IOPS=218, BW=54.7MiB/s (57.4MB/s)(556MiB/10160msec)
00:26:40.886 slat (usec): min=14, max=348270, avg=2585.09, stdev=15372.77
00:26:40.886 clat (msec): min=15, max=925, avg=289.57, stdev=205.46
00:26:40.886 lat (msec): min=16, max=925, avg=292.16, stdev=207.56
00:26:40.886 clat percentiles (msec):
00:26:40.886 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 88],
00:26:40.886 | 30.00th=[ 129], 40.00th=[ 192], 50.00th=[ 271], 60.00th=[ 347],
00:26:40.886 | 70.00th=[ 397], 80.00th=[ 447], 90.00th=[ 535], 95.00th=[ 709],
00:26:40.886 | 99.00th=[ 844], 99.50th=[ 911], 99.90th=[ 927], 99.95th=[ 927],
00:26:40.886 | 99.99th=[ 927]
00:26:40.886 bw ( KiB/s): min=21504, max=154112, per=6.97%, avg=55266.35, stdev=36197.41, samples=20
00:26:40.886 iops : min= 84, max= 602, avg=215.85, stdev=141.41, samples=20
00:26:40.886 lat (msec) : 20=0.36%, 50=11.20%, 100=10.75%, 250=25.28%, 500=40.13%
00:26:40.886 lat (msec) : 750=8.37%, 1000=3.91%
00:26:40.886 cpu : usr=0.02%, sys=0.84%, ctx=443, majf=0, minf=4097
00:26:40.886 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:26:40.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:40.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:40.886 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:40.886 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:40.886 job3: (groupid=0, jobs=1): err= 0: pid=4088947: Sat Dec 14 00:07:18 2024
00:26:40.886 read: IOPS=416, BW=104MiB/s (109MB/s)(1055MiB/10134msec)
00:26:40.886 slat (usec): min=15, 
max=195617, avg=1850.81, stdev=9144.59 00:26:40.886 clat (msec): min=18, max=714, avg=151.65, stdev=149.81 00:26:40.886 lat (msec): min=18, max=714, avg=153.50, stdev=151.37 00:26:40.886 clat percentiles (msec): 00:26:40.886 | 1.00th=[ 26], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 32], 00:26:40.886 | 30.00th=[ 34], 40.00th=[ 39], 50.00th=[ 84], 60.00th=[ 142], 00:26:40.886 | 70.00th=[ 203], 80.00th=[ 279], 90.00th=[ 384], 95.00th=[ 485], 00:26:40.886 | 99.00th=[ 567], 99.50th=[ 609], 99.90th=[ 684], 99.95th=[ 693], 00:26:40.886 | 99.99th=[ 718] 00:26:40.886 bw ( KiB/s): min=26112, max=493568, per=13.41%, avg=106390.60, stdev=122192.55, samples=20 00:26:40.886 iops : min= 102, max= 1928, avg=415.55, stdev=477.33, samples=20 00:26:40.886 lat (msec) : 20=0.28%, 50=45.55%, 100=5.24%, 250=26.59%, 500=18.48% 00:26:40.886 lat (msec) : 750=3.86% 00:26:40.886 cpu : usr=0.13%, sys=1.67%, ctx=638, majf=0, minf=4097 00:26:40.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:40.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.886 issued rwts: total=4220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.886 job4: (groupid=0, jobs=1): err= 0: pid=4088948: Sat Dec 14 00:07:18 2024 00:26:40.886 read: IOPS=219, BW=54.9MiB/s (57.6MB/s)(555MiB/10112msec) 00:26:40.886 slat (usec): min=21, max=289303, avg=4221.69, stdev=15271.69 00:26:40.886 clat (msec): min=16, max=703, avg=286.81, stdev=121.69 00:26:40.886 lat (msec): min=16, max=703, avg=291.03, stdev=123.65 00:26:40.886 clat percentiles (msec): 00:26:40.886 | 1.00th=[ 54], 5.00th=[ 97], 10.00th=[ 140], 20.00th=[ 186], 00:26:40.886 | 30.00th=[ 218], 40.00th=[ 243], 50.00th=[ 271], 60.00th=[ 305], 00:26:40.886 | 70.00th=[ 347], 80.00th=[ 397], 90.00th=[ 447], 95.00th=[ 498], 00:26:40.886 | 99.00th=[ 592], 99.50th=[ 
625], 99.90th=[ 693], 99.95th=[ 701], 00:26:40.886 | 99.99th=[ 701] 00:26:40.886 bw ( KiB/s): min=23040, max=114176, per=6.96%, avg=55213.35, stdev=22343.45, samples=20 00:26:40.886 iops : min= 90, max= 446, avg=215.65, stdev=87.28, samples=20 00:26:40.886 lat (msec) : 20=0.18%, 50=0.63%, 100=4.59%, 250=38.18%, 500=51.51% 00:26:40.886 lat (msec) : 750=4.91% 00:26:40.886 cpu : usr=0.10%, sys=1.03%, ctx=369, majf=0, minf=4097 00:26:40.886 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:40.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=2221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job5: (groupid=0, jobs=1): err= 0: pid=4088950: Sat Dec 14 00:07:18 2024 00:26:40.887 read: IOPS=215, BW=53.8MiB/s (56.4MB/s)(546MiB/10146msec) 00:26:40.887 slat (usec): min=15, max=264176, avg=1794.23, stdev=12453.09 00:26:40.887 clat (usec): min=1534, max=769379, avg=295482.65, stdev=162995.17 00:26:40.887 lat (usec): min=1578, max=769409, avg=297276.88, stdev=163729.31 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 4], 5.00th=[ 19], 10.00th=[ 35], 20.00th=[ 171], 00:26:40.887 | 30.00th=[ 213], 40.00th=[ 262], 50.00th=[ 296], 60.00th=[ 334], 00:26:40.887 | 70.00th=[ 372], 80.00th=[ 430], 90.00th=[ 510], 95.00th=[ 575], 00:26:40.887 | 99.00th=[ 684], 99.50th=[ 701], 99.90th=[ 751], 99.95th=[ 751], 00:26:40.887 | 99.99th=[ 768] 00:26:40.887 bw ( KiB/s): min=35328, max=123904, per=6.84%, avg=54246.40, stdev=21511.36, samples=20 00:26:40.887 iops : min= 138, max= 484, avg=211.90, stdev=84.03, samples=20 00:26:40.887 lat (msec) : 2=0.14%, 4=1.19%, 10=2.66%, 20=1.65%, 50=5.59% 00:26:40.887 lat (msec) : 100=3.02%, 250=23.05%, 500=51.33%, 750=11.32%, 1000=0.05% 00:26:40.887 cpu : usr=0.14%, sys=0.83%, ctx=559, majf=0, minf=4097 
00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job6: (groupid=0, jobs=1): err= 0: pid=4088951: Sat Dec 14 00:07:18 2024 00:26:40.887 read: IOPS=205, BW=51.3MiB/s (53.8MB/s)(521MiB/10154msec) 00:26:40.887 slat (usec): min=16, max=112339, avg=3635.07, stdev=13722.06 00:26:40.887 clat (msec): min=7, max=667, avg=308.08, stdev=116.08 00:26:40.887 lat (msec): min=7, max=667, avg=311.72, stdev=117.18 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 18], 5.00th=[ 87], 10.00th=[ 142], 20.00th=[ 234], 00:26:40.887 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 309], 60.00th=[ 330], 00:26:40.887 | 70.00th=[ 363], 80.00th=[ 405], 90.00th=[ 460], 95.00th=[ 502], 00:26:40.887 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 642], 00:26:40.887 | 99.99th=[ 667] 00:26:40.887 bw ( KiB/s): min=31744, max=97280, per=6.51%, avg=51686.40, stdev=15820.61, samples=20 00:26:40.887 iops : min= 124, max= 380, avg=201.90, stdev=61.80, samples=20 00:26:40.887 lat (msec) : 10=0.19%, 20=0.82%, 50=1.01%, 100=4.85%, 250=17.57% 00:26:40.887 lat (msec) : 500=70.28%, 750=5.28% 00:26:40.887 cpu : usr=0.08%, sys=0.86%, ctx=335, majf=0, minf=4097 00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job7: (groupid=0, jobs=1): err= 0: pid=4088952: Sat Dec 14 
00:07:18 2024 00:26:40.887 read: IOPS=336, BW=84.2MiB/s (88.3MB/s)(850MiB/10090msec) 00:26:40.887 slat (usec): min=11, max=313867, avg=1782.42, stdev=10303.26 00:26:40.887 clat (usec): min=1620, max=638441, avg=188024.74, stdev=147302.24 00:26:40.887 lat (usec): min=1666, max=783844, avg=189807.17, stdev=148190.44 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 63], 00:26:40.887 | 30.00th=[ 77], 40.00th=[ 121], 50.00th=[ 157], 60.00th=[ 197], 00:26:40.887 | 70.00th=[ 262], 80.00th=[ 321], 90.00th=[ 401], 95.00th=[ 472], 00:26:40.887 | 99.00th=[ 600], 99.50th=[ 634], 99.90th=[ 634], 99.95th=[ 642], 00:26:40.887 | 99.99th=[ 642] 00:26:40.887 bw ( KiB/s): min=32256, max=263168, per=10.76%, avg=85409.05, stdev=53554.06, samples=20 00:26:40.887 iops : min= 126, max= 1028, avg=333.60, stdev=209.20, samples=20 00:26:40.887 lat (msec) : 2=0.21%, 4=1.47%, 10=1.88%, 20=6.59%, 50=8.27% 00:26:40.887 lat (msec) : 100=18.48%, 250=31.86%, 500=27.71%, 750=3.53% 00:26:40.887 cpu : usr=0.12%, sys=1.18%, ctx=1055, majf=0, minf=4097 00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=3399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job8: (groupid=0, jobs=1): err= 0: pid=4088953: Sat Dec 14 00:07:18 2024 00:26:40.887 read: IOPS=233, BW=58.3MiB/s (61.1MB/s)(592MiB/10154msec) 00:26:40.887 slat (usec): min=15, max=181483, avg=4082.29, stdev=14915.48 00:26:40.887 clat (msec): min=12, max=667, avg=270.12, stdev=131.51 00:26:40.887 lat (msec): min=13, max=667, avg=274.20, stdev=133.25 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 26], 5.00th=[ 81], 10.00th=[ 102], 20.00th=[ 127], 00:26:40.887 | 30.00th=[ 199], 
40.00th=[ 243], 50.00th=[ 271], 60.00th=[ 305], 00:26:40.887 | 70.00th=[ 347], 80.00th=[ 384], 90.00th=[ 439], 95.00th=[ 502], 00:26:40.887 | 99.00th=[ 584], 99.50th=[ 609], 99.90th=[ 642], 99.95th=[ 667], 00:26:40.887 | 99.99th=[ 667] 00:26:40.887 bw ( KiB/s): min=27648, max=165376, per=7.43%, avg=58976.30, stdev=32143.40, samples=20 00:26:40.887 iops : min= 108, max= 646, avg=230.35, stdev=125.56, samples=20 00:26:40.887 lat (msec) : 20=0.42%, 50=2.58%, 100=6.59%, 250=33.00%, 500=52.77% 00:26:40.887 lat (msec) : 750=4.65% 00:26:40.887 cpu : usr=0.10%, sys=1.01%, ctx=323, majf=0, minf=3722 00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job9: (groupid=0, jobs=1): err= 0: pid=4088954: Sat Dec 14 00:07:18 2024 00:26:40.887 read: IOPS=199, BW=49.8MiB/s (52.2MB/s)(505MiB/10154msec) 00:26:40.887 slat (usec): min=15, max=165117, avg=3298.35, stdev=14120.56 00:26:40.887 clat (msec): min=2, max=717, avg=317.88, stdev=131.60 00:26:40.887 lat (msec): min=3, max=717, avg=321.18, stdev=132.86 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 23], 5.00th=[ 132], 10.00th=[ 161], 20.00th=[ 197], 00:26:40.887 | 30.00th=[ 230], 40.00th=[ 271], 50.00th=[ 326], 60.00th=[ 351], 00:26:40.887 | 70.00th=[ 384], 80.00th=[ 430], 90.00th=[ 502], 95.00th=[ 542], 00:26:40.887 | 99.00th=[ 651], 99.50th=[ 667], 99.90th=[ 718], 99.95th=[ 718], 00:26:40.887 | 99.99th=[ 718] 00:26:40.887 bw ( KiB/s): min=22016, max=86528, per=6.32%, avg=50126.70, stdev=17108.93, samples=20 00:26:40.887 iops : min= 86, max= 338, avg=195.75, stdev=66.81, samples=20 00:26:40.887 lat (msec) : 4=0.10%, 10=0.35%, 20=0.49%, 50=0.79%, 100=1.88% 
00:26:40.887 lat (msec) : 250=31.77%, 500=54.03%, 750=10.59% 00:26:40.887 cpu : usr=0.07%, sys=0.73%, ctx=369, majf=0, minf=4097 00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 job10: (groupid=0, jobs=1): err= 0: pid=4088955: Sat Dec 14 00:07:18 2024 00:26:40.887 read: IOPS=306, BW=76.6MiB/s (80.3MB/s)(774MiB/10114msec) 00:26:40.887 slat (usec): min=15, max=112674, avg=1472.65, stdev=6738.37 00:26:40.887 clat (usec): min=1649, max=619774, avg=207279.56, stdev=126288.09 00:26:40.887 lat (usec): min=1705, max=619802, avg=208752.21, stdev=126645.55 00:26:40.887 clat percentiles (msec): 00:26:40.887 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 45], 20.00th=[ 121], 00:26:40.887 | 30.00th=[ 136], 40.00th=[ 148], 50.00th=[ 184], 60.00th=[ 224], 00:26:40.887 | 70.00th=[ 271], 80.00th=[ 305], 90.00th=[ 384], 95.00th=[ 456], 00:26:40.887 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 617], 99.95th=[ 617], 00:26:40.887 | 99.99th=[ 617] 00:26:40.887 bw ( KiB/s): min=27136, max=144896, per=9.79%, avg=77663.10, stdev=35442.01, samples=20 00:26:40.887 iops : min= 106, max= 566, avg=303.35, stdev=138.45, samples=20 00:26:40.887 lat (msec) : 2=0.10%, 4=0.55%, 10=1.58%, 20=2.65%, 50=6.26% 00:26:40.887 lat (msec) : 100=6.88%, 250=47.89%, 500=31.48%, 750=2.62% 00:26:40.887 cpu : usr=0.13%, sys=1.15%, ctx=1004, majf=0, minf=4097 00:26:40.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:40.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:40.887 issued rwts: total=3097,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:40.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:40.887 00:26:40.887 Run status group 0 (all jobs): 00:26:40.887 READ: bw=775MiB/s (812MB/s), 49.8MiB/s-105MiB/s (52.2MB/s-110MB/s), io=7872MiB (8255MB), run=10090-10160msec 00:26:40.887 00:26:40.887 Disk stats (read/write): 00:26:40.887 nvme0n1: ios=8309/0, merge=0/0, ticks=1239709/0, in_queue=1239709, util=97.30% 00:26:40.887 nvme10n1: ios=6747/0, merge=0/0, ticks=1205310/0, in_queue=1205310, util=97.52% 00:26:40.887 nvme1n1: ios=4323/0, merge=0/0, ticks=1213047/0, in_queue=1213047, util=97.78% 00:26:40.887 nvme2n1: ios=8297/0, merge=0/0, ticks=1239779/0, in_queue=1239779, util=97.92% 00:26:40.887 nvme3n1: ios=4292/0, merge=0/0, ticks=1234106/0, in_queue=1234106, util=98.02% 00:26:40.887 nvme4n1: ios=4219/0, merge=0/0, ticks=1223514/0, in_queue=1223514, util=98.30% 00:26:40.887 nvme5n1: ios=4018/0, merge=0/0, ticks=1214939/0, in_queue=1214939, util=98.46% 00:26:40.887 nvme6n1: ios=6643/0, merge=0/0, ticks=1239786/0, in_queue=1239786, util=98.58% 00:26:40.888 nvme7n1: ios=4596/0, merge=0/0, ticks=1218073/0, in_queue=1218073, util=98.98% 00:26:40.888 nvme8n1: ios=3890/0, merge=0/0, ticks=1212299/0, in_queue=1212299, util=99.12% 00:26:40.888 nvme9n1: ios=6008/0, merge=0/0, ticks=1237800/0, in_queue=1237800, util=99.25% 00:26:40.888 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:40.888 [global] 00:26:40.888 thread=1 00:26:40.888 invalidate=1 00:26:40.888 rw=randwrite 00:26:40.888 time_based=1 00:26:40.888 runtime=10 00:26:40.888 ioengine=libaio 00:26:40.888 direct=1 00:26:40.888 bs=262144 00:26:40.888 iodepth=64 00:26:40.888 norandommap=1 00:26:40.888 numjobs=1 00:26:40.888 00:26:40.888 [job0] 00:26:40.888 filename=/dev/nvme0n1 00:26:40.888 [job1] 00:26:40.888 filename=/dev/nvme10n1 00:26:40.888 
[job2] 00:26:40.888 filename=/dev/nvme1n1 00:26:40.888 [job3] 00:26:40.888 filename=/dev/nvme2n1 00:26:40.888 [job4] 00:26:40.888 filename=/dev/nvme3n1 00:26:40.888 [job5] 00:26:40.888 filename=/dev/nvme4n1 00:26:40.888 [job6] 00:26:40.888 filename=/dev/nvme5n1 00:26:40.888 [job7] 00:26:40.888 filename=/dev/nvme6n1 00:26:40.888 [job8] 00:26:40.888 filename=/dev/nvme7n1 00:26:40.888 [job9] 00:26:40.888 filename=/dev/nvme8n1 00:26:40.888 [job10] 00:26:40.888 filename=/dev/nvme9n1 00:26:40.888 Could not set queue depth (nvme0n1) 00:26:40.888 Could not set queue depth (nvme10n1) 00:26:40.888 Could not set queue depth (nvme1n1) 00:26:40.888 Could not set queue depth (nvme2n1) 00:26:40.888 Could not set queue depth (nvme3n1) 00:26:40.888 Could not set queue depth (nvme4n1) 00:26:40.888 Could not set queue depth (nvme5n1) 00:26:40.888 Could not set queue depth (nvme6n1) 00:26:40.888 Could not set queue depth (nvme7n1) 00:26:40.888 Could not set queue depth (nvme8n1) 00:26:40.888 Could not set queue depth (nvme9n1) 00:26:40.888 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:40.888 fio-3.35 00:26:40.888 Starting 11 threads 00:26:50.864 00:26:50.864 job0: (groupid=0, jobs=1): err= 0: pid=4089984: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=458, BW=115MiB/s (120MB/s)(1159MiB/10117msec); 0 zone resets 00:26:50.864 slat (usec): min=20, max=111445, avg=1882.41, stdev=4781.43 00:26:50.864 clat (msec): min=9, max=575, avg=137.76, stdev=92.81 00:26:50.864 lat (msec): min=9, max=576, avg=139.65, stdev=93.97 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 35], 5.00th=[ 52], 10.00th=[ 70], 20.00th=[ 90], 00:26:50.864 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 100], 00:26:50.864 | 70.00th=[ 155], 80.00th=[ 194], 90.00th=[ 230], 95.00th=[ 266], 00:26:50.864 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 575], 99.95th=[ 575], 00:26:50.864 | 99.99th=[ 575] 00:26:50.864 bw ( KiB/s): min=28672, max=242176, per=12.01%, avg=117043.20, stdev=57844.77, samples=20 00:26:50.864 iops : min= 112, max= 946, avg=457.20, stdev=225.96, samples=20 00:26:50.864 lat (msec) : 10=0.02%, 20=0.32%, 50=3.71%, 100=56.42%, 250=33.07% 00:26:50.864 lat (msec) : 500=4.23%, 750=2.22% 00:26:50.864 cpu : usr=0.99%, sys=1.62%, ctx=1559, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,4635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:26:50.864 job1: (groupid=0, jobs=1): err= 0: pid=4089996: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=280, BW=70.1MiB/s (73.5MB/s)(711MiB/10137msec); 0 zone resets 00:26:50.864 slat (usec): min=30, max=127863, avg=3513.52, stdev=7114.40 00:26:50.864 clat (msec): min=37, max=610, avg=224.51, stdev=93.18 00:26:50.864 lat (msec): min=37, max=610, avg=228.03, stdev=94.34 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 121], 5.00th=[ 136], 10.00th=[ 148], 20.00th=[ 167], 00:26:50.864 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 213], 00:26:50.864 | 70.00th=[ 243], 80.00th=[ 271], 90.00th=[ 317], 95.00th=[ 443], 00:26:50.864 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 609], 99.95th=[ 609], 00:26:50.864 | 99.99th=[ 609] 00:26:50.864 bw ( KiB/s): min=26624, max=106496, per=7.30%, avg=71168.00, stdev=23462.79, samples=20 00:26:50.864 iops : min= 104, max= 416, avg=278.00, stdev=91.65, samples=20 00:26:50.864 lat (msec) : 50=0.14%, 100=0.42%, 250=71.10%, 500=24.44%, 750=3.90% 00:26:50.864 cpu : usr=0.68%, sys=1.03%, ctx=699, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,2844,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job2: (groupid=0, jobs=1): err= 0: pid=4089997: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=418, BW=105MiB/s (110MB/s)(1052MiB/10057msec); 0 zone resets 00:26:50.864 slat (usec): min=23, max=101077, avg=2237.37, stdev=5781.79 00:26:50.864 clat (usec): min=1527, max=475052, avg=150661.65, stdev=121728.51 00:26:50.864 lat (usec): min=1589, max=475096, avg=152899.01, stdev=123443.89 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 8], 5.00th=[ 34], 
10.00th=[ 47], 20.00th=[ 50], 00:26:50.864 | 30.00th=[ 51], 40.00th=[ 68], 50.00th=[ 101], 60.00th=[ 127], 00:26:50.864 | 70.00th=[ 222], 80.00th=[ 271], 90.00th=[ 351], 95.00th=[ 397], 00:26:50.864 | 99.00th=[ 435], 99.50th=[ 460], 99.90th=[ 477], 99.95th=[ 477], 00:26:50.864 | 99.99th=[ 477] 00:26:50.864 bw ( KiB/s): min=38912, max=333824, per=10.89%, avg=106112.00, stdev=84331.48, samples=20 00:26:50.864 iops : min= 152, max= 1304, avg=414.50, stdev=329.42, samples=20 00:26:50.864 lat (msec) : 2=0.02%, 4=0.31%, 10=1.09%, 20=1.69%, 50=27.19% 00:26:50.864 lat (msec) : 100=19.46%, 250=28.35%, 500=21.89% 00:26:50.864 cpu : usr=0.98%, sys=1.38%, ctx=1366, majf=0, minf=2 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,4208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job3: (groupid=0, jobs=1): err= 0: pid=4089998: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=272, BW=68.2MiB/s (71.5MB/s)(693MiB/10168msec); 0 zone resets 00:26:50.864 slat (usec): min=21, max=49847, avg=2506.75, stdev=6719.24 00:26:50.864 clat (usec): min=1647, max=536210, avg=232067.10, stdev=123350.11 00:26:50.864 lat (usec): min=1687, max=543935, avg=234573.85, stdev=125024.06 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 56], 20.00th=[ 124], 00:26:50.864 | 30.00th=[ 165], 40.00th=[ 203], 50.00th=[ 226], 60.00th=[ 266], 00:26:50.864 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 397], 95.00th=[ 422], 00:26:50.864 | 99.00th=[ 518], 99.50th=[ 527], 99.90th=[ 535], 99.95th=[ 535], 00:26:50.864 | 99.99th=[ 535] 00:26:50.864 bw ( KiB/s): min=30720, max=139264, per=7.12%, avg=69376.00, stdev=28736.65, samples=20 00:26:50.864 iops : min= 120, max= 
544, avg=271.00, stdev=112.25, samples=20 00:26:50.864 lat (msec) : 2=0.14%, 4=1.37%, 10=2.67%, 20=0.76%, 50=4.15% 00:26:50.864 lat (msec) : 100=7.21%, 250=37.54%, 500=44.10%, 750=2.06% 00:26:50.864 cpu : usr=0.61%, sys=0.93%, ctx=1549, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,2773,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job4: (groupid=0, jobs=1): err= 0: pid=4089999: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=468, BW=117MiB/s (123MB/s)(1187MiB/10127msec); 0 zone resets 00:26:50.864 slat (usec): min=26, max=95049, avg=1688.78, stdev=5108.37 00:26:50.864 clat (usec): min=1556, max=444325, avg=134739.47, stdev=110320.04 00:26:50.864 lat (usec): min=1640, max=444379, avg=136428.26, stdev=111763.76 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 25], 20.00th=[ 60], 00:26:50.864 | 30.00th=[ 66], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 123], 00:26:50.864 | 70.00th=[ 194], 80.00th=[ 215], 90.00th=[ 321], 95.00th=[ 388], 00:26:50.864 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 443], 99.95th=[ 443], 00:26:50.864 | 99.99th=[ 443] 00:26:50.864 bw ( KiB/s): min=38912, max=249344, per=12.31%, avg=119884.80, stdev=70709.73, samples=20 00:26:50.864 iops : min= 152, max= 974, avg=468.30, stdev=276.21, samples=20 00:26:50.864 lat (msec) : 2=0.13%, 4=0.48%, 10=2.59%, 20=4.02%, 50=10.09% 00:26:50.864 lat (msec) : 100=37.84%, 250=30.59%, 500=14.24% 00:26:50.864 cpu : usr=1.26%, sys=1.48%, ctx=2339, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,4746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job5: (groupid=0, jobs=1): err= 0: pid=4090000: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=291, BW=72.8MiB/s (76.3MB/s)(740MiB/10171msec); 0 zone resets 00:26:50.864 slat (usec): min=24, max=52329, avg=3278.62, stdev=6922.18 00:26:50.864 clat (usec): min=1291, max=445385, avg=216317.10, stdev=111515.49 00:26:50.864 lat (usec): min=1983, max=445438, avg=219595.72, stdev=113108.41 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 68], 20.00th=[ 95], 00:26:50.864 | 30.00th=[ 174], 40.00th=[ 201], 50.00th=[ 215], 60.00th=[ 230], 00:26:50.864 | 70.00th=[ 279], 80.00th=[ 326], 90.00th=[ 376], 95.00th=[ 401], 00:26:50.864 | 99.00th=[ 430], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:26:50.864 | 99.99th=[ 447] 00:26:50.864 bw ( KiB/s): min=38912, max=199168, per=7.61%, avg=74137.60, stdev=41912.17, samples=20 00:26:50.864 iops : min= 152, max= 778, avg=289.60, stdev=163.72, samples=20 00:26:50.864 lat (msec) : 2=0.10%, 4=0.20%, 10=2.23%, 20=1.55%, 50=3.55% 00:26:50.864 lat (msec) : 100=13.34%, 250=43.61%, 500=35.41% 00:26:50.864 cpu : usr=0.67%, sys=1.09%, ctx=962, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,2960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job6: (groupid=0, jobs=1): err= 0: pid=4090001: Sat Dec 14 00:07:29 2024 00:26:50.864 write: IOPS=500, BW=125MiB/s (131MB/s)(1267MiB/10122msec); 0 zone resets 00:26:50.864 slat (usec): min=17, max=182006, 
avg=1570.70, stdev=4322.75 00:26:50.864 clat (usec): min=1000, max=688354, avg=126262.20, stdev=73550.02 00:26:50.864 lat (usec): min=1044, max=693223, avg=127832.90, stdev=74210.94 00:26:50.864 clat percentiles (msec): 00:26:50.864 | 1.00th=[ 6], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 90], 00:26:50.864 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 100], 00:26:50.864 | 70.00th=[ 146], 80.00th=[ 192], 90.00th=[ 213], 95.00th=[ 241], 00:26:50.864 | 99.00th=[ 384], 99.50th=[ 542], 99.90th=[ 651], 99.95th=[ 676], 00:26:50.864 | 99.99th=[ 693] 00:26:50.864 bw ( KiB/s): min=66048, max=225792, per=13.15%, avg=128076.80, stdev=46806.13, samples=20 00:26:50.864 iops : min= 258, max= 882, avg=500.30, stdev=182.84, samples=20 00:26:50.864 lat (msec) : 2=0.18%, 4=0.49%, 10=1.13%, 20=0.83%, 50=2.33% 00:26:50.864 lat (msec) : 100=55.92%, 250=35.29%, 500=3.14%, 750=0.69% 00:26:50.864 cpu : usr=1.16%, sys=1.37%, ctx=2070, majf=0, minf=1 00:26:50.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:50.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.864 issued rwts: total=0,5066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.864 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.864 job7: (groupid=0, jobs=1): err= 0: pid=4090002: Sat Dec 14 00:07:29 2024 00:26:50.865 write: IOPS=272, BW=68.1MiB/s (71.4MB/s)(691MiB/10136msec); 0 zone resets 00:26:50.865 slat (usec): min=25, max=134915, avg=2891.89, stdev=7285.10 00:26:50.865 clat (usec): min=1280, max=625144, avg=231875.79, stdev=124450.66 00:26:50.865 lat (usec): min=1427, max=629849, avg=234767.68, stdev=126198.73 00:26:50.865 clat percentiles (msec): 00:26:50.865 | 1.00th=[ 6], 5.00th=[ 29], 10.00th=[ 64], 20.00th=[ 114], 00:26:50.865 | 30.00th=[ 186], 40.00th=[ 201], 50.00th=[ 222], 60.00th=[ 234], 00:26:50.865 | 70.00th=[ 305], 80.00th=[ 334], 90.00th=[ 
393], 95.00th=[ 422], 00:26:50.865 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 617], 99.95th=[ 625], 00:26:50.865 | 99.99th=[ 625] 00:26:50.865 bw ( KiB/s): min=33280, max=127488, per=7.09%, avg=69094.40, stdev=30945.30, samples=20 00:26:50.865 iops : min= 130, max= 498, avg=269.90, stdev=120.88, samples=20 00:26:50.865 lat (msec) : 2=0.22%, 4=0.51%, 10=1.05%, 20=2.03%, 50=4.24% 00:26:50.865 lat (msec) : 100=9.88%, 250=44.42%, 500=35.08%, 750=2.57% 00:26:50.865 cpu : usr=0.71%, sys=0.79%, ctx=1308, majf=0, minf=1 00:26:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.865 issued rwts: total=0,2762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.865 job8: (groupid=0, jobs=1): err= 0: pid=4090003: Sat Dec 14 00:07:29 2024 00:26:50.865 write: IOPS=281, BW=70.4MiB/s (73.8MB/s)(716MiB/10168msec); 0 zone resets 00:26:50.865 slat (usec): min=21, max=280732, avg=2458.26, stdev=9023.68 00:26:50.865 clat (usec): min=1624, max=637769, avg=224616.68, stdev=134962.96 00:26:50.865 lat (msec): min=2, max=669, avg=227.07, stdev=136.65 00:26:50.865 clat percentiles (msec): 00:26:50.865 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 34], 20.00th=[ 81], 00:26:50.865 | 30.00th=[ 182], 40.00th=[ 199], 50.00th=[ 213], 60.00th=[ 230], 00:26:50.865 | 70.00th=[ 284], 80.00th=[ 347], 90.00th=[ 409], 95.00th=[ 447], 00:26:50.865 | 99.00th=[ 567], 99.50th=[ 584], 99.90th=[ 642], 99.95th=[ 642], 00:26:50.865 | 99.99th=[ 642] 00:26:50.865 bw ( KiB/s): min= 8704, max=214016, per=7.35%, avg=71654.40, stdev=43032.53, samples=20 00:26:50.865 iops : min= 34, max= 836, avg=279.90, stdev=168.10, samples=20 00:26:50.865 lat (msec) : 2=0.03%, 4=0.17%, 10=1.15%, 20=4.54%, 50=9.01% 00:26:50.865 lat (msec) : 100=6.81%, 250=42.12%, 
500=32.76%, 750=3.39% 00:26:50.865 cpu : usr=0.64%, sys=0.96%, ctx=1619, majf=0, minf=1 00:26:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.865 issued rwts: total=0,2863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.865 job9: (groupid=0, jobs=1): err= 0: pid=4090004: Sat Dec 14 00:07:29 2024 00:26:50.865 write: IOPS=282, BW=70.6MiB/s (74.0MB/s)(715MiB/10132msec); 0 zone resets 00:26:50.865 slat (usec): min=26, max=219691, avg=3243.47, stdev=7767.36 00:26:50.865 clat (msec): min=48, max=617, avg=223.29, stdev=92.69 00:26:50.865 lat (msec): min=48, max=624, avg=226.54, stdev=93.78 00:26:50.865 clat percentiles (msec): 00:26:50.865 | 1.00th=[ 130], 5.00th=[ 142], 10.00th=[ 159], 20.00th=[ 169], 00:26:50.865 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 203], 00:26:50.865 | 70.00th=[ 232], 80.00th=[ 266], 90.00th=[ 292], 95.00th=[ 472], 00:26:50.865 | 99.00th=[ 592], 99.50th=[ 600], 99.90th=[ 617], 99.95th=[ 617], 00:26:50.865 | 99.99th=[ 617] 00:26:50.865 bw ( KiB/s): min=26624, max=106496, per=7.35%, avg=71628.80, stdev=23714.82, samples=20 00:26:50.865 iops : min= 104, max= 416, avg=279.80, stdev=92.64, samples=20 00:26:50.865 lat (msec) : 50=0.03%, 250=74.10%, 500=21.53%, 750=4.33% 00:26:50.865 cpu : usr=0.69%, sys=1.08%, ctx=865, majf=0, minf=1 00:26:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.865 issued rwts: total=0,2861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.865 job10: (groupid=0, 
jobs=1): err= 0: pid=4090005: Sat Dec 14 00:07:29 2024 00:26:50.865 write: IOPS=293, BW=73.5MiB/s (77.1MB/s)(747MiB/10168msec); 0 zone resets 00:26:50.865 slat (usec): min=22, max=136489, avg=2899.09, stdev=7124.93 00:26:50.865 clat (usec): min=1316, max=645561, avg=214711.97, stdev=112562.59 00:26:50.865 lat (msec): min=2, max=645, avg=217.61, stdev=114.13 00:26:50.865 clat percentiles (msec): 00:26:50.865 | 1.00th=[ 6], 5.00th=[ 42], 10.00th=[ 87], 20.00th=[ 155], 00:26:50.865 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 199], 60.00th=[ 215], 00:26:50.865 | 70.00th=[ 251], 80.00th=[ 271], 90.00th=[ 326], 95.00th=[ 456], 00:26:50.865 | 99.00th=[ 625], 99.50th=[ 634], 99.90th=[ 642], 99.95th=[ 642], 00:26:50.865 | 99.99th=[ 642] 00:26:50.865 bw ( KiB/s): min=26624, max=126976, per=7.69%, avg=74905.60, stdev=24226.63, samples=20 00:26:50.865 iops : min= 104, max= 496, avg=292.60, stdev=94.64, samples=20 00:26:50.865 lat (msec) : 2=0.10%, 4=0.23%, 10=2.84%, 20=0.50%, 50=1.81% 00:26:50.865 lat (msec) : 100=6.09%, 250=58.01%, 500=26.33%, 750=4.08% 00:26:50.865 cpu : usr=0.66%, sys=0.93%, ctx=1291, majf=0, minf=1 00:26:50.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:50.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:50.865 issued rwts: total=0,2989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.865 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:50.865 00:26:50.865 Run status group 0 (all jobs): 00:26:50.865 WRITE: bw=951MiB/s (998MB/s), 68.1MiB/s-125MiB/s (71.4MB/s-131MB/s), io=9677MiB (10.1GB), run=10057-10171msec 00:26:50.865 00:26:50.865 Disk stats (read/write): 00:26:50.865 nvme0n1: ios=49/9090, merge=0/0, ticks=46/1204357, in_queue=1204403, util=97.22% 00:26:50.865 nvme10n1: ios=31/5495, merge=0/0, ticks=125/1201497, in_queue=1201622, util=97.86% 00:26:50.865 nvme1n1: ios=45/8057, merge=0/0, 
ticks=687/1201470, in_queue=1202157, util=99.88% 00:26:50.865 nvme2n1: ios=0/5533, merge=0/0, ticks=0/1246640, in_queue=1246640, util=97.75% 00:26:50.865 nvme3n1: ios=42/9311, merge=0/0, ticks=2727/1201663, in_queue=1204390, util=99.86% 00:26:50.865 nvme4n1: ios=45/5903, merge=0/0, ticks=575/1233386, in_queue=1233961, util=99.94% 00:26:50.865 nvme5n1: ios=0/9915, merge=0/0, ticks=0/1207353, in_queue=1207353, util=98.21% 00:26:50.865 nvme6n1: ios=0/5334, merge=0/0, ticks=0/1210727, in_queue=1210727, util=98.37% 00:26:50.865 nvme7n1: ios=43/5713, merge=0/0, ticks=912/1240216, in_queue=1241128, util=99.92% 00:26:50.865 nvme8n1: ios=40/5531, merge=0/0, ticks=2232/1204995, in_queue=1207227, util=99.93% 00:26:50.865 nvme9n1: ios=0/5966, merge=0/0, ticks=0/1239708, in_queue=1239708, util=99.08% 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:50.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:50.865 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:51.433 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:51.433 00:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.433 00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:52.001 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.001 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:52.570 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.570 00:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.570 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:52.829 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:52.829 00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:53.398 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.398 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:53.966 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:53.966 00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:54.225 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.226 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:54.794 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:54.794 00:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:54.794 00:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:55.054 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDK10 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.054 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:55.622 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.622 00:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:55.622 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.623 rmmod nvme_tcp 00:26:55.623 rmmod nvme_fabrics 00:26:55.623 rmmod nvme_keyring 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 4082203 ']' 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 4082203 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 4082203 ']' 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 4082203 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4082203 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4082203' 00:26:55.623 killing process with pid 4082203 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 4082203 00:26:55.623 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 4082203 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.918 00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:01.453 00:27:01.453 real 1m17.074s 00:27:01.453 user 4m40.544s 00:27:01.453 sys 0m16.801s 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:01.453 ************************************ 00:27:01.453 END TEST nvmf_multiconnection 00:27:01.453 ************************************ 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:01.453 ************************************ 00:27:01.453 START TEST nvmf_initiator_timeout 00:27:01.453 ************************************ 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:01.453 * Looking for test storage... 00:27:01.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra 
ver1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:01.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.453 --rc genhtml_branch_coverage=1 00:27:01.453 --rc genhtml_function_coverage=1 
00:27:01.453 --rc genhtml_legend=1 00:27:01.453 --rc geninfo_all_blocks=1 00:27:01.453 --rc geninfo_unexecuted_blocks=1 00:27:01.453 00:27:01.453 ' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:01.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.453 --rc genhtml_branch_coverage=1 00:27:01.453 --rc genhtml_function_coverage=1 00:27:01.453 --rc genhtml_legend=1 00:27:01.453 --rc geninfo_all_blocks=1 00:27:01.453 --rc geninfo_unexecuted_blocks=1 00:27:01.453 00:27:01.453 ' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:01.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.453 --rc genhtml_branch_coverage=1 00:27:01.453 --rc genhtml_function_coverage=1 00:27:01.453 --rc genhtml_legend=1 00:27:01.453 --rc geninfo_all_blocks=1 00:27:01.453 --rc geninfo_unexecuted_blocks=1 00:27:01.453 00:27:01.453 ' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:01.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.453 --rc genhtml_branch_coverage=1 00:27:01.453 --rc genhtml_function_coverage=1 00:27:01.453 --rc genhtml_legend=1 00:27:01.453 --rc geninfo_all_blocks=1 00:27:01.453 --rc geninfo_unexecuted_blocks=1 00:27:01.453 00:27:01.453 ' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.453 00:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.453 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:01.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:01.454 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@321 -- # x722=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.725 00:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:06.725 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 
'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:06.725 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.725 00:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:06.725 Found net devices under 0000:af:00.0: cvl_0_0 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:06.725 Found net devices under 0000:af:00.1: cvl_0_1 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.725 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp 
== tcp ]] 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.726 00:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:27:06.726 00:27:06.726 --- 10.0.0.2 ping statistics --- 00:27:06.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.726 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:06.726 00:27:06.726 --- 10.0.0.1 ping statistics --- 00:27:06.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.726 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=4095802 
00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 4095802 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 4095802 ']' 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.726 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:06.726 [2024-12-14 00:07:45.560062] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:27:06.726 [2024-12-14 00:07:45.560171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.726 [2024-12-14 00:07:45.680591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.726 [2024-12-14 00:07:45.796986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:06.726 [2024-12-14 00:07:45.797032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.726 [2024-12-14 00:07:45.797043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.726 [2024-12-14 00:07:45.797053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.726 [2024-12-14 00:07:45.797061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.726 [2024-12-14 00:07:45.799459] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.726 [2024-12-14 00:07:45.799554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.726 [2024-12-14 00:07:45.799577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.726 [2024-12-14 00:07:45.799583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:07.293 
00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.293 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 Malloc0 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 Delay0 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 [2024-12-14 00:07:46.497000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:07.552 [2024-12-14 00:07:46.525278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.552 00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:08.929 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:08.929 
00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:08.929 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.929 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:08.929 00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:10.865 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:10.865 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:10.865 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:10.865 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:10.865 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.866 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:10.866 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4096497 00:27:10.866 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:10.866 00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:10.866 [global] 00:27:10.866 thread=1 00:27:10.866 invalidate=1 00:27:10.866 rw=write 00:27:10.866 time_based=1 00:27:10.866 runtime=60 00:27:10.866 ioengine=libaio 00:27:10.866 direct=1 00:27:10.866 bs=4096 00:27:10.866 
iodepth=1 00:27:10.866 norandommap=0 00:27:10.866 numjobs=1 00:27:10.866 00:27:10.866 verify_dump=1 00:27:10.866 verify_backlog=512 00:27:10.866 verify_state_save=0 00:27:10.866 do_verify=1 00:27:10.866 verify=crc32c-intel 00:27:10.866 [job0] 00:27:10.866 filename=/dev/nvme0n1 00:27:10.866 Could not set queue depth (nvme0n1) 00:27:11.130 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:11.130 fio-3.35 00:27:11.130 Starting 1 thread 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.656 true 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.656 true 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.656 true 00:27:13.656 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.657 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:13.657 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.657 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.657 true 00:27:13.657 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.657 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 true 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 true 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.937 00:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 true 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.937 true 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:16.937 00:07:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4096497 00:28:13.230 00:28:13.230 job0: (groupid=0, jobs=1): err= 0: pid=4096727: Sat Dec 14 00:08:50 2024 00:28:13.230 read: IOPS=269, BW=1080KiB/s (1106kB/s)(63.3MiB/60001msec) 00:28:13.230 slat (usec): min=6, max=10761, avg= 8.97, stdev=84.53 00:28:13.230 clat (usec): min=218, max=41544k, avg=3464.22, stdev=326449.77 00:28:13.230 lat (usec): min=226, max=41544k, avg=3473.19, stdev=326449.89 00:28:13.230 clat percentiles (usec): 00:28:13.230 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 265], 00:28:13.230 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:28:13.230 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 441], 00:28:13.230 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:28:13.230 | 99.99th=[42206] 00:28:13.230 write: IOPS=273, BW=1092KiB/s (1118kB/s)(64.0MiB/60001msec); 0 zone resets 00:28:13.230 slat (usec): min=9, max=28923, avg=13.37, stdev=225.88 00:28:13.230 clat (usec): min=160, max=1248, avg=209.15, stdev=26.19 00:28:13.230 lat (usec): min=171, max=29237, avg=222.53, stdev=228.23 00:28:13.230 clat percentiles (usec): 00:28:13.230 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 190], 00:28:13.230 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:28:13.230 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 251], 00:28:13.230 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 343], 00:28:13.230 | 99.99th=[ 433] 00:28:13.230 bw ( KiB/s): min= 1216, max= 8192, per=100.00%, avg=6482.11, stdev=2275.48, samples=19 00:28:13.230 iops : min= 304, max= 2048, avg=1620.53, stdev=568.87, samples=19 00:28:13.230 lat (usec) : 250=51.02%, 500=48.17%, 750=0.06% 00:28:13.230 lat (msec) : 2=0.01%, 50=0.75%, >=2000=0.01% 00:28:13.230 cpu : usr=0.44%, sys=0.78%, ctx=32588, majf=0, minf=1 00:28:13.230 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:13.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:13.230 issued rwts: total=16198,16384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:13.230 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:13.230 00:28:13.230 Run status group 0 (all jobs): 00:28:13.230 READ: bw=1080KiB/s (1106kB/s), 1080KiB/s-1080KiB/s (1106kB/s-1106kB/s), io=63.3MiB (66.3MB), run=60001-60001msec 00:28:13.230 WRITE: bw=1092KiB/s (1118kB/s), 1092KiB/s-1092KiB/s (1118kB/s-1118kB/s), io=64.0MiB (67.1MB), run=60001-60001msec 00:28:13.230 00:28:13.230 Disk stats (read/write): 00:28:13.230 nvme0n1: ios=15924/16332, merge=0/0, ticks=15619/3257, in_queue=18876, 
util=99.65% 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:13.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:13.230 nvmf hotplug test: fio successful as expected 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.230 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.231 00:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.231 rmmod nvme_tcp 00:28:13.231 rmmod nvme_fabrics 00:28:13.231 rmmod nvme_keyring 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 4095802 ']' 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 4095802 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 4095802 ']' 00:28:13.231 
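The fio write summary earlier reports BW=1092KiB/s at bs=4096 with IOPS=273; since the job uses a fixed block size, IOPS is just bandwidth divided by block size, which can be checked directly:

```shell
# Relate the fio-reported bandwidth to its IOPS figure:
# IOPS = (BW in bytes/s) / block size.
bw_kib=1092   # write bandwidth from the fio summary, KiB/s
bs=4096       # block size from the job file, bytes
iops=$(( bw_kib * 1024 / bs ))
echo "$iops"  # 273, matching the reported write IOPS
```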
00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 4095802 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4095802 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4095802' 00:28:13.231 killing process with pid 4095802 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 4095802 00:28:13.231 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 4095802 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.231 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.133 00:28:15.133 real 1m13.956s 00:28:15.133 user 4m29.094s 00:28:15.133 sys 0m6.761s 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:15.133 ************************************ 00:28:15.133 END TEST nvmf_initiator_timeout 00:28:15.133 ************************************ 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.133 00:08:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:20.399 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
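The probe above builds per-family device lists (e810, x722, mlx) keyed by vendor:device ID and then matches each discovered PCI device against them; 0x8086:0x159b lands in the e810 list, as the "Found 0000:af:00.0 (0x8086 - 0x159b)" lines below confirm. A simplified bash sketch of that classification (list contents abridged to the two e810 IDs shown here):

```shell
# Classify a PCI vendor/device pair the way the nvmf common.sh probe does,
# reduced to the e810 entries visible in this log.
intel=0x8086
e810_ids="0x1592 0x159b"

classify() {
  local vendor=$1 device=$2
  if [ "$vendor" = "$intel" ]; then
    case " $e810_ids " in
      *" $device "*) echo e810; return ;;
    esac
  fi
  echo unknown
}

classify 0x8086 0x159b   # prints: e810
```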
00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:20.399 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:20.399 Found net devices under 0000:af:00.0: cvl_0_0 00:28:20.399 00:08:59 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:20.399 Found net devices under 0000:af:00.1: cvl_0_1 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:20.399 ************************************ 00:28:20.399 START 
TEST nvmf_perf_adq 00:28:20.399 ************************************ 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:20.399 * Looking for test storage... 00:28:20.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.399 00:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:20.399 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.400 --rc genhtml_branch_coverage=1 00:28:20.400 --rc genhtml_function_coverage=1 00:28:20.400 --rc genhtml_legend=1 00:28:20.400 --rc geninfo_all_blocks=1 00:28:20.400 --rc geninfo_unexecuted_blocks=1 00:28:20.400 00:28:20.400 ' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.400 --rc genhtml_branch_coverage=1 00:28:20.400 --rc genhtml_function_coverage=1 00:28:20.400 --rc genhtml_legend=1 00:28:20.400 --rc geninfo_all_blocks=1 00:28:20.400 --rc geninfo_unexecuted_blocks=1 00:28:20.400 00:28:20.400 ' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.400 --rc genhtml_branch_coverage=1 00:28:20.400 --rc genhtml_function_coverage=1 00:28:20.400 --rc genhtml_legend=1 00:28:20.400 --rc geninfo_all_blocks=1 00:28:20.400 --rc geninfo_unexecuted_blocks=1 00:28:20.400 00:28:20.400 ' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.400 --rc genhtml_branch_coverage=1 00:28:20.400 --rc genhtml_function_coverage=1 00:28:20.400 --rc genhtml_legend=1 00:28:20.400 --rc geninfo_all_blocks=1 00:28:20.400 --rc geninfo_unexecuted_blocks=1 00:28:20.400 00:28:20.400 ' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.400 
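The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` (`IFS=.-:`) and compares the components numerically, padding missing fields with 0. A compact bash sketch of the same comparison:

```shell
# Field-wise numeric version comparison, as in scripts/common.sh's
# cmp_versions: split on ".-:", compare component by component.
version_lt() {
  local IFS=.-:
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Here 1.15 splits into (1, 15) and 2 into (2); the first fields already decide it, which is why the lcov version check in the log takes the `lt` branch.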
00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:20.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:20.400 00:08:59 
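The trace above records a real (non-fatal) bash error from `nvmf/common.sh` line 33: `'[' '' -eq 1 ']'` prints "integer expression expected" because an empty string reaches a numeric `-eq` test. A minimal reproduction with a common guard; `SAMPLE_FLAG` is an illustrative stand-in, not an SPDK variable:

```shell
# Reproduce the failure mode: an empty variable passed to a numeric test
# makes `[` complain on stderr; the test evaluates false and the script
# simply falls through to the else branch, as in the log.
SAMPLE_FLAG=""   # SAMPLE_FLAG is hypothetical, used only for this sketch

if [ "$SAMPLE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag not set (or non-numeric)"
fi

# A common guard: substitute a numeric default before comparing, so the
# test is always given an integer.
if [ "${SAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```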
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.400 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:25.666 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:25.667 00:09:04 
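The `e810+=(...)`/`mlx+=(...)` lines traced here bucket PCI functions into per-NIC-family arrays by looking up `vendor:device` keys in a `pci_bus_cache` associative array. A hedged sketch of that pattern with a mocked cache (the PCI addresses below are illustrative, not taken from this run):

```shell
# Mock of the device-ID bucketing seen in the trace: pci_bus_cache maps
# "vendor:device" to a space-separated list of PCI addresses; appending
# its values (unquoted, so they word-split) groups devices by family.
declare -A pci_bus_cache
intel=0x8086 mellanox=0x15b3
pci_bus_cache["$intel:0x159b"]="0000:af:00.0 0000:af:00.1"  # two E810 functions
pci_bus_cache["$mellanox:0x1017"]=""                        # no CX-5 present

e810=() mlx=()
e810+=(${pci_bus_cache["$intel:0x159b"]})    # unquoted on purpose: word-split
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # empty value appends nothing

# As in the log, the selected family becomes the working device list.
pci_devs=("${e810[@]}")
echo "${#pci_devs[@]} E810 device(s): ${pci_devs[*]}"
```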
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:25.667 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:25.667 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:25.667 Found net devices under 0000:af:00.0: cvl_0_0 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:25.667 Found net devices under 0000:af:00.1: cvl_0_1 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:25.667 00:09:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:27.041 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:29.580 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:34.852 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:34.853 00:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.853 00:09:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.853 Found net devices under 0000:af:00.0: cvl_0_0 
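The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above maps each PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix. A sketch using literal path strings (no sysfs access is performed here):

```shell
# The glob in the real script yields full sysfs paths; the "##*/"
# expansion removes everything up to the last slash from every element,
# leaving bare interface names that are then collected into net_devs.
pci_net_devs=("/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")

net_devs=()
net_devs+=("${pci_net_devs[@]}")
echo "Found net devices: ${net_devs[*]}"
```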
00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.853 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.853 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.838 ms 00:28:34.854 00:28:34.854 --- 10.0.0.2 ping statistics --- 00:28:34.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.854 rtt min/avg/max/mdev = 0.838/0.838/0.838/0.000 ms 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
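The `ipts` call traced above expands to an `iptables` invocation that tags the rule with a comment embedding its own arguments (`SPDK_NVMF:...`), which lets later cleanup delete exactly the rules the test added. A hedged mock of that wrapper; it prints the command instead of executing it, since real iptables needs root (the `echo` stand-in is the assumption here):

```shell
# Mock of the ipts wrapper: forward the arguments and append a comment
# match recording them verbatim, mirroring the rule shape in the log.
ipts() {
    # The real helper would exec iptables; we print for illustration.
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Deleting by comment later (e.g. listing rules and matching on `SPDK_NVMF:`) avoids touching unrelated firewall entries on a shared CI host.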
00:28:34.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:28:34.854 00:28:34.854 --- 10.0.0.1 ping statistics --- 00:28:34.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.854 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4114852 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4114852 00:28:34.854 
00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4114852 ']' 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.854 00:09:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:34.854 [2024-12-14 00:09:13.718827] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:34.854 [2024-12-14 00:09:13.718919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.854 [2024-12-14 00:09:13.836401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.854 [2024-12-14 00:09:13.942107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.854 [2024-12-14 00:09:13.942145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:34.854 [2024-12-14 00:09:13.942157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.854 [2024-12-14 00:09:13.942168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.854 [2024-12-14 00:09:13.942176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:34.854 [2024-12-14 00:09:13.944580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.854 [2024-12-14 00:09:13.944662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.854 [2024-12-14 00:09:13.944735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.854 [2024-12-14 00:09:13.944745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:35.421 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:35.421 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:35.421 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:35.421 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:35.421 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.679 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.680 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.680 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:35.680 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.680 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.938 [2024-12-14 00:09:14.980752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.938 
00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.938 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.938 Malloc1 00:28:35.938 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.938 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:35.938 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.938 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:36.195 [2024-12-14 00:09:15.094468] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=4115101 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:36.195 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:38.092 "tick_rate": 2100000000, 00:28:38.092 "poll_groups": [ 00:28:38.092 { 00:28:38.092 "name": "nvmf_tgt_poll_group_000", 00:28:38.092 "admin_qpairs": 1, 00:28:38.092 "io_qpairs": 1, 00:28:38.092 "current_admin_qpairs": 1, 00:28:38.092 "current_io_qpairs": 1, 00:28:38.092 "pending_bdev_io": 0, 00:28:38.092 "completed_nvme_io": 18164, 00:28:38.092 "transports": [ 00:28:38.092 { 00:28:38.092 "trtype": "TCP" 00:28:38.092 } 00:28:38.092 ] 00:28:38.092 }, 00:28:38.092 { 00:28:38.092 "name": "nvmf_tgt_poll_group_001", 00:28:38.092 "admin_qpairs": 0, 00:28:38.092 "io_qpairs": 1, 00:28:38.092 "current_admin_qpairs": 0, 00:28:38.092 "current_io_qpairs": 1, 00:28:38.092 "pending_bdev_io": 0, 00:28:38.092 "completed_nvme_io": 17922, 00:28:38.092 "transports": [ 
00:28:38.092 { 00:28:38.092 "trtype": "TCP" 00:28:38.092 } 00:28:38.092 ] 00:28:38.092 }, 00:28:38.092 { 00:28:38.092 "name": "nvmf_tgt_poll_group_002", 00:28:38.092 "admin_qpairs": 0, 00:28:38.092 "io_qpairs": 1, 00:28:38.092 "current_admin_qpairs": 0, 00:28:38.092 "current_io_qpairs": 1, 00:28:38.092 "pending_bdev_io": 0, 00:28:38.092 "completed_nvme_io": 18372, 00:28:38.092 "transports": [ 00:28:38.092 { 00:28:38.092 "trtype": "TCP" 00:28:38.092 } 00:28:38.092 ] 00:28:38.092 }, 00:28:38.092 { 00:28:38.092 "name": "nvmf_tgt_poll_group_003", 00:28:38.092 "admin_qpairs": 0, 00:28:38.092 "io_qpairs": 1, 00:28:38.092 "current_admin_qpairs": 0, 00:28:38.092 "current_io_qpairs": 1, 00:28:38.092 "pending_bdev_io": 0, 00:28:38.092 "completed_nvme_io": 17970, 00:28:38.092 "transports": [ 00:28:38.092 { 00:28:38.092 "trtype": "TCP" 00:28:38.092 } 00:28:38.092 ] 00:28:38.092 } 00:28:38.092 ] 00:28:38.092 }' 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:38.092 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 4115101 00:28:46.191 Initializing NVMe Controllers 00:28:46.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:46.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:46.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:46.191 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:46.191 Initialization complete. Launching workers. 00:28:46.191 ======================================================== 00:28:46.191 Latency(us) 00:28:46.191 Device Information : IOPS MiB/s Average min max 00:28:46.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10052.00 39.27 6365.60 2556.22 10388.42 00:28:46.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9862.10 38.52 6490.64 2166.41 15614.12 00:28:46.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9866.90 38.54 6487.11 2001.32 11164.64 00:28:46.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9763.40 38.14 6554.04 2548.48 12188.64 00:28:46.192 ======================================================== 00:28:46.192 Total : 39544.38 154.47 6473.63 2001.32 15614.12 00:28:46.192 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.449 rmmod nvme_tcp 00:28:46.449 rmmod nvme_fabrics 00:28:46.449 rmmod nvme_keyring 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:46.449 00:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4114852 ']' 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4114852 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4114852 ']' 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4114852 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4114852 00:28:46.449 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.450 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.450 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4114852' 00:28:46.450 killing process with pid 4114852 00:28:46.450 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4114852 00:28:46.450 00:09:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4114852 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:47.826 
00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.826 00:09:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.360 00:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.360 00:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:50.360 00:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:50.360 00:09:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:51.299 00:09:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:53.834 00:09:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.107 00:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:59.107 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.107 00:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:59.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:28:59.107 Found net devices under 0000:af:00.0: cvl_0_0 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:59.107 Found net devices under 0000:af:00.1: cvl_0_1 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.107 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:28:59.108 00:28:59.108 --- 10.0.0.2 ping statistics --- 00:28:59.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.108 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:28:59.108 00:28:59.108 --- 10.0.0.1 ping statistics --- 00:28:59.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.108 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:59.108 net.core.busy_poll = 1 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:59.108 net.core.busy_read = 1 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:59.108 00:09:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=4119205 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 4119205 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 4119205 ']' 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.108 00:09:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.366 [2024-12-14 00:09:38.309697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:59.366 [2024-12-14 00:09:38.309792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.366 [2024-12-14 00:09:38.430048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.625 [2024-12-14 00:09:38.540824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.625 [2024-12-14 00:09:38.540869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.625 [2024-12-14 00:09:38.540880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.625 [2024-12-14 00:09:38.540890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
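Earlier in this log the script validates ADQ steering by counting poll groups that currently own an I/O qpair: `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` emits one line per matching group, and `wc -l` must equal the number of cores in `-m 0xF`. The same check can be sketched in Python against the `nvmf_get_stats` output, abbreviated to the fields the filter touches (values copied from the stats shown above):

```python
import json

# Abbreviated nvmf_get_stats output from the log above: four poll groups,
# each currently serving exactly one I/O qpair.
stats = json.loads("""
{
  "tick_rate": 2100000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1, "completed_nvme_io": 18164},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1, "completed_nvme_io": 17922},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1, "completed_nvme_io": 18372},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1, "completed_nvme_io": 17970}
  ]
}
""")

# Equivalent of: jq '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l
active = [g for g in stats["poll_groups"] if g["current_io_qpairs"] == 1]
count = len(active)
print(count)  # 4 -- one active I/O qpair per poll group, so the [[ 4 -ne 4 ]] guard passes
```

An even spread of `completed_nvme_io` across the four groups is the signal that connections were steered one-per-poll-group rather than piling onto a single core.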
00:28:59.625 [2024-12-14 00:09:38.540898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.625 [2024-12-14 00:09:38.543037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.625 [2024-12-14 00:09:38.543111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.625 [2024-12-14 00:09:38.543178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.625 [2024-12-14 00:09:38.543188] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.192 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.451 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:00.451 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.451 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.451 [2024-12-14 00:09:39.542904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.451 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.452 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:00.452 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.452 00:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.709 Malloc1 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.709 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.710 [2024-12-14 00:09:39.670160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=4119497 
00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:00.710 00:09:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:02.609 "tick_rate": 2100000000, 00:29:02.609 "poll_groups": [ 00:29:02.609 { 00:29:02.609 "name": "nvmf_tgt_poll_group_000", 00:29:02.609 "admin_qpairs": 1, 00:29:02.609 "io_qpairs": 2, 00:29:02.609 "current_admin_qpairs": 1, 00:29:02.609 "current_io_qpairs": 2, 00:29:02.609 "pending_bdev_io": 0, 00:29:02.609 "completed_nvme_io": 25596, 00:29:02.609 "transports": [ 00:29:02.609 { 00:29:02.609 "trtype": "TCP" 00:29:02.609 } 00:29:02.609 ] 00:29:02.609 }, 00:29:02.609 { 00:29:02.609 "name": "nvmf_tgt_poll_group_001", 00:29:02.609 "admin_qpairs": 0, 00:29:02.609 "io_qpairs": 2, 00:29:02.609 "current_admin_qpairs": 0, 00:29:02.609 "current_io_qpairs": 2, 00:29:02.609 "pending_bdev_io": 0, 00:29:02.609 "completed_nvme_io": 25705, 00:29:02.609 "transports": [ 00:29:02.609 { 00:29:02.609 "trtype": "TCP" 00:29:02.609 } 00:29:02.609 ] 00:29:02.609 }, 00:29:02.609 { 00:29:02.609 "name": "nvmf_tgt_poll_group_002", 00:29:02.609 "admin_qpairs": 0, 00:29:02.609 "io_qpairs": 0, 00:29:02.609 "current_admin_qpairs": 0, 
00:29:02.609 "current_io_qpairs": 0, 00:29:02.609 "pending_bdev_io": 0, 00:29:02.609 "completed_nvme_io": 0, 00:29:02.609 "transports": [ 00:29:02.609 { 00:29:02.609 "trtype": "TCP" 00:29:02.609 } 00:29:02.609 ] 00:29:02.609 }, 00:29:02.609 { 00:29:02.609 "name": "nvmf_tgt_poll_group_003", 00:29:02.609 "admin_qpairs": 0, 00:29:02.609 "io_qpairs": 0, 00:29:02.609 "current_admin_qpairs": 0, 00:29:02.609 "current_io_qpairs": 0, 00:29:02.609 "pending_bdev_io": 0, 00:29:02.609 "completed_nvme_io": 0, 00:29:02.609 "transports": [ 00:29:02.609 { 00:29:02.609 "trtype": "TCP" 00:29:02.609 } 00:29:02.609 ] 00:29:02.609 } 00:29:02.609 ] 00:29:02.609 }' 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:02.609 00:09:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 4119497 00:29:12.578 Initializing NVMe Controllers 00:29:12.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:12.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:12.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:12.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:12.578 Initialization complete. Launching workers. 
00:29:12.578 ======================================================== 00:29:12.578 Latency(us) 00:29:12.578 Device Information : IOPS MiB/s Average min max 00:29:12.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7506.60 29.32 8526.78 1692.79 53766.51 00:29:12.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6076.90 23.74 10532.65 1639.45 56324.85 00:29:12.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7697.10 30.07 8314.68 1455.80 55759.98 00:29:12.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6084.40 23.77 10551.73 1605.07 54511.04 00:29:12.578 ======================================================== 00:29:12.578 Total : 27364.99 106.89 9362.80 1455.80 56324.85 00:29:12.578 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.578 rmmod nvme_tcp 00:29:12.578 rmmod nvme_fabrics 00:29:12.578 rmmod nvme_keyring 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:12.578 00:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 4119205 ']' 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 4119205 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 4119205 ']' 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 4119205 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.578 00:09:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4119205 00:29:12.578 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.578 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.578 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4119205' 00:29:12.578 killing process with pid 4119205 00:29:12.578 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 4119205 00:29:12.578 00:09:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 4119205 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:12.578 
00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.578 00:09:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:14.484 00:29:14.484 real 0m54.171s 00:29:14.484 user 2m58.673s 00:29:14.484 sys 0m10.343s 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.484 ************************************ 00:29:14.484 END TEST nvmf_perf_adq 00:29:14.484 ************************************ 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.484 ************************************ 00:29:14.484 START TEST nvmf_shutdown 00:29:14.484 ************************************ 00:29:14.484 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.743 * Looking for test storage... 00:29:14.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.743 00:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.743 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.744 --rc genhtml_branch_coverage=1 00:29:14.744 --rc genhtml_function_coverage=1 00:29:14.744 --rc genhtml_legend=1 00:29:14.744 --rc geninfo_all_blocks=1 00:29:14.744 --rc geninfo_unexecuted_blocks=1 00:29:14.744 00:29:14.744 ' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.744 --rc genhtml_branch_coverage=1 00:29:14.744 --rc genhtml_function_coverage=1 00:29:14.744 --rc genhtml_legend=1 00:29:14.744 --rc geninfo_all_blocks=1 00:29:14.744 --rc geninfo_unexecuted_blocks=1 00:29:14.744 00:29:14.744 ' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.744 --rc genhtml_branch_coverage=1 00:29:14.744 --rc genhtml_function_coverage=1 00:29:14.744 --rc genhtml_legend=1 00:29:14.744 --rc geninfo_all_blocks=1 00:29:14.744 --rc geninfo_unexecuted_blocks=1 00:29:14.744 00:29:14.744 ' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.744 --rc genhtml_branch_coverage=1 00:29:14.744 --rc genhtml_function_coverage=1 00:29:14.744 --rc genhtml_legend=1 00:29:14.744 --rc geninfo_all_blocks=1 00:29:14.744 --rc geninfo_unexecuted_blocks=1 00:29:14.744 00:29:14.744 ' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.744 ************************************ 00:29:14.744 START TEST nvmf_shutdown_tc1 00:29:14.744 ************************************ 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.744 00:09:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:21.303 00:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.303 00:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:21.303 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.303 00:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:21.303 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:21.303 Found net devices under 0000:af:00.0: cvl_0_0 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.303 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:21.304 Found net devices under 0000:af:00.1: cvl_0_1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.304 00:09:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:29:21.304 00:29:21.304 --- 10.0.0.2 ping statistics --- 00:29:21.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.304 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:29:21.304 00:29:21.304 --- 10.0.0.1 ping statistics --- 00:29:21.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.304 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=4124827 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 4124827 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4124827 ']' 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:21.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.304 00:09:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.304 [2024-12-14 00:09:59.632240] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:21.304 [2024-12-14 00:09:59.632330] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.304 [2024-12-14 00:09:59.750783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.304 [2024-12-14 00:09:59.863639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.304 [2024-12-14 00:09:59.863684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.304 [2024-12-14 00:09:59.863695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.304 [2024-12-14 00:09:59.863706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.304 [2024-12-14 00:09:59.863714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:21.304 [2024-12-14 00:09:59.866127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.304 [2024-12-14 00:09:59.866210] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.304 [2024-12-14 00:09:59.866329] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.304 [2024-12-14 00:09:59.866377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 [2024-12-14 00:10:00.483813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.562 00:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.562 00:10:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:21.562 Malloc1 00:29:21.562 [2024-12-14 00:10:00.659174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.820 Malloc2 00:29:21.820 Malloc3 00:29:21.820 Malloc4 00:29:22.078 Malloc5 00:29:22.078 Malloc6 00:29:22.334 Malloc7 00:29:22.334 Malloc8 00:29:22.334 Malloc9 
00:29:22.592 Malloc10 00:29:22.592 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.592 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=4125202 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 4125202 /var/tmp/bdevperf.sock 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 4125202 ']' 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": 
${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 
00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.593 "adrfam": "ipv4", 00:29:22.593 "trsvcid": "$NVMF_PORT", 00:29:22.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.593 "hdgst": ${hdgst:-false}, 00:29:22.593 "ddgst": ${ddgst:-false} 00:29:22.593 }, 00:29:22.593 "method": "bdev_nvme_attach_controller" 00:29:22.593 } 00:29:22.593 EOF 00:29:22.593 )") 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:22.593 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:22.593 { 00:29:22.593 "params": { 00:29:22.593 "name": "Nvme$subsystem", 00:29:22.593 "trtype": "$TEST_TRANSPORT", 00:29:22.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "$NVMF_PORT", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:22.594 "hdgst": ${hdgst:-false}, 00:29:22.594 "ddgst": ${ddgst:-false} 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 } 00:29:22.594 EOF 00:29:22.594 )") 00:29:22.594 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:22.594 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:29:22.594 [2024-12-14 00:10:01.685844] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:22.594 [2024-12-14 00:10:01.685927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:22.594 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:22.594 00:10:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme1", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme2", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme3", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 
00:29:22.594 "name": "Nvme4", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme5", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme6", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme7", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme8", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme9", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 },{ 00:29:22.594 "params": { 00:29:22.594 "name": "Nvme10", 00:29:22.594 "trtype": "tcp", 00:29:22.594 "traddr": "10.0.0.2", 00:29:22.594 "adrfam": "ipv4", 00:29:22.594 "trsvcid": "4420", 00:29:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:22.594 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:22.594 "hdgst": false, 00:29:22.594 "ddgst": false 00:29:22.594 }, 00:29:22.594 "method": "bdev_nvme_attach_controller" 00:29:22.594 }' 00:29:22.852 [2024-12-14 00:10:01.801690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.852 [2024-12-14 00:10:01.914861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 4125202 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:25.528 00:10:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:26.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 4125202 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:26.097 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 4124827 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": 
${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 
00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.357 "method": "bdev_nvme_attach_controller" 00:29:26.357 } 00:29:26.357 EOF 00:29:26.357 )") 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.357 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.357 { 00:29:26.357 "params": { 00:29:26.357 "name": "Nvme$subsystem", 00:29:26.357 "trtype": "$TEST_TRANSPORT", 00:29:26.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.357 "adrfam": "ipv4", 00:29:26.357 "trsvcid": "$NVMF_PORT", 00:29:26.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.357 "hdgst": ${hdgst:-false}, 00:29:26.357 "ddgst": ${ddgst:-false} 00:29:26.357 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 } 00:29:26.358 EOF 00:29:26.358 )") 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.358 { 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme$subsystem", 00:29:26.358 "trtype": "$TEST_TRANSPORT", 00:29:26.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "$NVMF_PORT", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.358 "hdgst": ${hdgst:-false}, 00:29:26.358 "ddgst": ${ddgst:-false} 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 } 00:29:26.358 EOF 00:29:26.358 )") 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:26.358 00:10:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme1", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme2", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 
00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme3", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme4", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme5", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme6", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme7", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme8", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme9", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 },{ 00:29:26.358 "params": { 00:29:26.358 "name": "Nvme10", 00:29:26.358 "trtype": "tcp", 00:29:26.358 "traddr": "10.0.0.2", 00:29:26.358 "adrfam": "ipv4", 00:29:26.358 "trsvcid": "4420", 00:29:26.358 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:26.358 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:26.358 "hdgst": false, 00:29:26.358 "ddgst": false 00:29:26.358 }, 00:29:26.358 "method": "bdev_nvme_attach_controller" 00:29:26.358 }' 00:29:26.358 [2024-12-14 00:10:05.315958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
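The trace above shows `gen_nvmf_target_json` building one JSON params fragment per subsystem with a heredoc inside `config+=("$(cat <<-EOF ...)")`, then comma-joining the array via `IFS=,` and `printf` before piping through `jq`. A minimal standalone sketch of that assembly pattern (simplified fragments and fixed values for illustration; this is not the actual SPDK `nvmf/common.sh` helper):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern visible in the trace:
# one heredoc JSON fragment per subsystem, joined with commas.
config=()
for subsystem in 1 2; do
  # Command substitution strips the heredoc's trailing newline,
  # so each array element is a single-line JSON object.
  config+=("$(cat <<EOF
{"params": {"name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"}, "method": "bdev_nvme_attach_controller"}
EOF
)")
done
# "${config[*]}" joins elements with the first character of IFS;
# the subshell keeps the IFS change local.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '{"config": [%s]}\n' "$joined"
```

With two iterations this emits one JSON document whose `config` array holds a `bdev_nvme_attach_controller` entry per subsystem, which is the shape the expanded `printf '%s\n' '{ ... },{ ... }'` output in the log corresponds to.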
00:29:26.358 [2024-12-14 00:10:05.316055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4125798 ] 00:29:26.358 [2024-12-14 00:10:05.436810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.615 [2024-12-14 00:10:05.552617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.987 Running I/O for 1 seconds... 00:29:29.361 1953.00 IOPS, 122.06 MiB/s 00:29:29.361 Latency(us) 00:29:29.361 [2024-12-13T23:10:08.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme1n1 : 1.09 241.26 15.08 0.00 0.00 256603.58 18350.08 222697.57 00:29:29.361 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme2n1 : 1.12 249.73 15.61 0.00 0.00 237012.14 17101.78 239674.51 00:29:29.361 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme3n1 : 1.13 230.50 14.41 0.00 0.00 265684.62 6959.30 229688.08 00:29:29.361 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme4n1 : 1.17 272.68 17.04 0.00 0.00 222476.97 17101.78 240673.16 00:29:29.361 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme5n1 : 1.13 231.97 14.50 0.00 0.00 250282.95 19348.72 248662.31 00:29:29.361 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 
length 0x400 00:29:29.361 Nvme6n1 : 1.14 225.23 14.08 0.00 0.00 260588.98 17226.61 250659.60 00:29:29.361 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme7n1 : 1.18 271.92 17.00 0.00 0.00 213021.84 15416.56 259647.39 00:29:29.361 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme8n1 : 1.18 270.56 16.91 0.00 0.00 210938.54 14230.67 235679.94 00:29:29.361 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme9n1 : 1.16 219.80 13.74 0.00 0.00 255074.74 31207.62 248662.31 00:29:29.361 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.361 Verification LBA range: start 0x0 length 0x400 00:29:29.361 Nvme10n1 : 1.19 269.21 16.83 0.00 0.00 205480.81 11796.48 265639.25 00:29:29.361 [2024-12-13T23:10:08.502Z] =================================================================================================================== 00:29:29.361 [2024-12-13T23:10:08.502Z] Total : 2482.87 155.18 0.00 0.00 235592.89 6959.30 265639.25 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.295 00:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.295 rmmod nvme_tcp 00:29:30.295 rmmod nvme_fabrics 00:29:30.295 rmmod nvme_keyring 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 4124827 ']' 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 4124827 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 4124827 ']' 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 4124827 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:30.295 00:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.295 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4124827 00:29:30.553 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:30.553 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:30.553 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4124827' 00:29:30.553 killing process with pid 4124827 00:29:30.553 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 4124827 00:29:30.553 00:10:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 4124827 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.840 00:10:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:35.743 00:29:35.743 real 0m20.805s 00:29:35.743 user 0m57.121s 00:29:35.743 sys 0m6.090s 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:35.743 ************************************ 00:29:35.743 END TEST nvmf_shutdown_tc1 00:29:35.743 ************************************ 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:35.743 ************************************ 00:29:35.743 START TEST nvmf_shutdown_tc2 00:29:35.743 ************************************ 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.743 00:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:35.743 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:35.743 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.743 00:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.743 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:35.744 Found net devices under 0000:af:00.0: cvl_0_0 00:29:35.744 00:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:35.744 Found net devices under 0000:af:00.1: cvl_0_1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.744 00:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.744 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:36.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:29:36.003 00:29:36.003 --- 10.0.0.2 ping statistics --- 00:29:36.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.003 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:36.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:29:36.003 00:29:36.003 --- 10.0.0.1 ping statistics --- 00:29:36.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.003 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.003 00:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=4127373 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 4127373 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4127373 ']' 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:36.003 00:10:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.003 [2024-12-14 00:10:15.051409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:36.003 [2024-12-14 00:10:15.051519] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.261 [2024-12-14 00:10:15.169336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:36.261 [2024-12-14 00:10:15.282340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.261 [2024-12-14 00:10:15.282384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.261 [2024-12-14 00:10:15.282395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.261 [2024-12-14 00:10:15.282405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.261 [2024-12-14 00:10:15.282413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:36.261 [2024-12-14 00:10:15.284934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.261 [2024-12-14 00:10:15.285005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.261 [2024-12-14 00:10:15.285148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.261 [2024-12-14 00:10:15.285171] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.827 [2024-12-14 00:10:15.903905] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.827 00:10:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:36.827 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.085 00:10:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:37.085 Malloc1 00:29:37.085 [2024-12-14 00:10:16.076865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.085 Malloc2 00:29:37.343 Malloc3 00:29:37.343 Malloc4 00:29:37.343 Malloc5 00:29:37.600 Malloc6 00:29:37.600 Malloc7 00:29:37.857 Malloc8 00:29:37.857 Malloc9 
00:29:37.857 Malloc10 00:29:37.857 00:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.857 00:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:37.857 00:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:37.857 00:10:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=4127849 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 4127849 /var/tmp/bdevperf.sock 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 4127849 ']' 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 
"adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": 
${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.116 { 00:29:38.116 "params": { 00:29:38.116 "name": "Nvme$subsystem", 00:29:38.116 "trtype": "$TEST_TRANSPORT", 00:29:38.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.116 "adrfam": "ipv4", 00:29:38.116 "trsvcid": "$NVMF_PORT", 00:29:38.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.116 "hdgst": ${hdgst:-false}, 00:29:38.116 "ddgst": ${ddgst:-false} 00:29:38.116 }, 00:29:38.116 "method": "bdev_nvme_attach_controller" 00:29:38.116 } 00:29:38.116 EOF 00:29:38.116 )") 00:29:38.116 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.117 { 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme$subsystem", 00:29:38.117 "trtype": "$TEST_TRANSPORT", 00:29:38.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "$NVMF_PORT", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.117 "hdgst": ${hdgst:-false}, 00:29:38.117 "ddgst": ${ddgst:-false} 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 } 00:29:38.117 EOF 00:29:38.117 
)") 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.117 { 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme$subsystem", 00:29:38.117 "trtype": "$TEST_TRANSPORT", 00:29:38.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "$NVMF_PORT", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.117 "hdgst": ${hdgst:-false}, 00:29:38.117 "ddgst": ${ddgst:-false} 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 } 00:29:38.117 EOF 00:29:38.117 )") 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:38.117 { 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme$subsystem", 00:29:38.117 "trtype": "$TEST_TRANSPORT", 00:29:38.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "$NVMF_PORT", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:38.117 "hdgst": ${hdgst:-false}, 00:29:38.117 "ddgst": ${ddgst:-false} 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 } 00:29:38.117 EOF 00:29:38.117 )") 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:38.117 
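The repeated `config+=("$(cat <<-EOF ... EOF)")` lines above come from a loop in nvmf/common.sh that appends one JSON fragment per subsystem to a bash array. A minimal standalone sketch of that pattern (two subsystems and trimmed params for illustration; not the exact nvmf/common.sh code):

```shell
#!/usr/bin/env bash
# Collect one JSON fragment per subsystem in a bash array, using a heredoc
# inside command substitution, as the config+=() trace above does.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
echo "collected ${#config[@]} fragments"
```

Each array element is a complete JSON object; the fragments are only merged into a single document later, after the loop finishes.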
00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:38.117 00:10:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme1", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme2", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme3", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme4", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 
00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme5", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme6", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme7", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme8", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme9", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 },{ 00:29:38.117 "params": { 00:29:38.117 "name": "Nvme10", 00:29:38.117 "trtype": "tcp", 00:29:38.117 "traddr": "10.0.0.2", 00:29:38.117 "adrfam": "ipv4", 00:29:38.117 "trsvcid": "4420", 00:29:38.117 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:38.117 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:38.117 "hdgst": false, 00:29:38.117 "ddgst": false 00:29:38.117 }, 00:29:38.117 "method": "bdev_nvme_attach_controller" 00:29:38.117 }' 00:29:38.117 [2024-12-14 00:10:17.094026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:38.117 [2024-12-14 00:10:17.094113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127849 ] 00:29:38.117 [2024-12-14 00:10:17.210848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.375 [2024-12-14 00:10:17.323466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.275 Running I/O for 10 seconds... 
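The merged `'{ ... },{ ... }'` output above is produced by joining the collected fragments with commas. The trace's `IFS=,` followed by `printf '%s\n' "${config[*]}"` relies on `"${array[*]}"` expanding with the first character of IFS as separator; a small sketch of just that join step (fragment contents shortened, not the real config):

```shell
# Join JSON fragments with commas via IFS, mirroring the IFS=, /
# printf '%s\n' "${config[*]}" step in the trace above.
config=('{"name":"Nvme1"}' '{"name":"Nvme2"}')
old_ifs=$IFS
IFS=,
joined="${config[*]}"   # "${config[*]}" joins elements with the first char of IFS
IFS=$old_ifs
printf '[%s]\n' "$joined"   # wrapped in [] this would be a valid JSON array
```

In the real script the joined string is fed through `jq .` for validation before being handed to bdevperf as its `--json` config.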
00:29:40.533 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.533 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:40.533 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:40.533 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.533 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:40.791 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:41.049 00:10:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 4127849 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4127849 ']' 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4127849 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127849 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127849' 00:29:41.049 killing process with pid 4127849 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4127849 00:29:41.049 00:10:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4127849 00:29:41.049 
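The `read_io_count=67` / `read_io_count=131` trace above is target/shutdown.sh's `waitforio` loop polling `bdev_get_iostat` until the bdev has completed at least 100 reads, with a bounded retry count. A sketch of that loop's shape, with the RPC call stubbed out by a counter (the stub values are illustrative, chosen to echo the 67-then-131 progression in the log):

```shell
# Poll a bdev's read-op counter until it crosses a threshold or retries run
# out, following the waitforio shape in target/shutdown.sh. The real query
# (rpc_cmd ... bdev_get_iostat | jq -r '.bdevs[0].num_read_ops') is stubbed.
fake_ops=0
get_read_ops() {                 # stand-in for the real iostat RPC query
  fake_ops=$((fake_ops + 67))    # first poll ~67 ops, second ~134
  read_io_count=$fake_ops
}
ret=1
i=10
while [ "$i" -ne 0 ]; do
  get_read_ops
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break
  fi
  sleep 0.25
  i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"
```

Note the stub sets `read_io_count` directly rather than printing it through command substitution, so the counter persists across iterations instead of being lost in a subshell.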
Received shutdown signal, test time was about 0.915826 seconds 00:29:41.049 00:29:41.049 Latency(us) 00:29:41.049 [2024-12-13T23:10:20.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme1n1 : 0.91 282.10 17.63 0.00 0.00 224184.08 17601.10 240673.16 00:29:41.049 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme2n1 : 0.88 241.59 15.10 0.00 0.00 252981.62 9237.46 223696.21 00:29:41.049 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme3n1 : 0.90 288.10 18.01 0.00 0.00 210828.65 3666.90 241671.80 00:29:41.049 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme4n1 : 0.91 281.15 17.57 0.00 0.00 212411.73 17226.61 240673.16 00:29:41.049 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme5n1 : 0.90 214.12 13.38 0.00 0.00 273084.55 20347.37 255652.82 00:29:41.049 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme6n1 : 0.88 218.15 13.63 0.00 0.00 261940.83 19723.22 245666.38 00:29:41.049 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.049 Nvme7n1 : 0.86 222.63 13.91 0.00 0.00 250492.18 16352.79 238675.87 00:29:41.049 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.049 Verification LBA range: start 0x0 length 0x400 00:29:41.050 Nvme8n1 : 0.92 279.74 17.48 0.00 0.00 
196773.06 22344.66 238675.87 00:29:41.050 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.050 Verification LBA range: start 0x0 length 0x400 00:29:41.050 Nvme9n1 : 0.89 216.15 13.51 0.00 0.00 247894.15 19473.55 243669.09 00:29:41.050 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:41.050 Verification LBA range: start 0x0 length 0x400 00:29:41.050 Nvme10n1 : 0.89 214.81 13.43 0.00 0.00 244170.36 19848.05 269633.83 00:29:41.050 [2024-12-13T23:10:20.191Z] =================================================================================================================== 00:29:41.050 [2024-12-13T23:10:20.191Z] Total : 2458.54 153.66 0.00 0.00 234493.52 3666.90 269633.83 00:29:42.422 00:10:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.355 rmmod nvme_tcp 00:29:43.355 rmmod nvme_fabrics 00:29:43.355 rmmod nvme_keyring 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 4127373 ']' 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 4127373 ']' 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4127373' 00:29:43.355 killing process with pid 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 4127373 00:29:43.355 00:10:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 4127373 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.637 00:10:25 
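The `killprocess 4127373` trace above checks, in order, that the pid is non-empty, that the process is still alive (`kill -0`), and that its comm name is not `sudo` before sending the signal. A self-contained sketch of those guards (simplified from the autotest_common.sh trace; the Linux/uname branch and SIGTERM-then-wait details are reduced for illustration):

```shell
# Terminate a process by pid with the same guards the killprocess() trace
# above shows: non-empty pid, process alive, and comm name is not "sudo".
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 1
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" != "sudo" ] || return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap if it was our child; tolerate SIGTERM status
}
sleep 30 &
killprocess $!
```

The `sudo` check matters because killing the `sudo` wrapper instead of the wrapped process would leave the real workload running; the trace resolves the comm name first for exactly that reason.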
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.637 00:10:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.540 00:29:48.540 real 0m12.826s 00:29:48.540 user 0m43.489s 00:29:48.540 sys 0m1.712s 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.540 ************************************ 00:29:48.540 END TEST nvmf_shutdown_tc2 00:29:48.540 ************************************ 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.540 ************************************ 00:29:48.540 START TEST nvmf_shutdown_tc3 00:29:48.540 ************************************ 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.540 
00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.540 00:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:48.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.540 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:48.541 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.541 00:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:48.541 Found net devices under 0000:af:00.0: cvl_0_0 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.541 00:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:48.541 Found net devices under 0000:af:00.1: cvl_0_1 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.541 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:29:48.800 00:29:48.800 --- 10.0.0.2 ping statistics --- 00:29:48.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.800 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:29:48.800 00:29:48.800 --- 10.0.0.1 ping statistics --- 00:29:48.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.800 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.800 
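The namespace plumbing traced above (nvmf/common.sh@250-291) moves the target-side interface into a dedicated netns, assigns the two 10.0.0.x addresses, opens TCP port 4420, and verifies connectivity with ping. A minimal dry-run sketch of that sequence is below; `nvmf_tcp_init_sketch` and the `RUN=echo` guard are hypothetical additions (not SPDK helpers) so the commands can be previewed without root or real NICs:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the log: cvl_0_0 becomes the
# target side (10.0.0.2) inside namespace cvl_0_0_ns_spdk, cvl_0_1 stays
# in the root namespace as the initiator side (10.0.0.1).
# RUN=echo prints each command instead of executing it; set RUN= to apply.
RUN=${RUN:-echo}

nvmf_tcp_init_sketch() {
  local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
  # Clear any stale addresses, then create the target namespace.
  $RUN ip -4 addr flush "$target_if"
  $RUN ip -4 addr flush "$initiator_if"
  $RUN ip netns add "$ns"
  $RUN ip link set "$target_if" netns "$ns"
  # Address both ends of the link.
  $RUN ip addr add 10.0.0.1/24 dev "$initiator_if"
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  # Bring the interfaces (and loopback in the namespace) up.
  $RUN ip link set "$initiator_if" up
  $RUN ip netns exec "$ns" ip link set "$target_if" up
  $RUN ip netns exec "$ns" ip link set lo up
  # Allow NVMe/TCP traffic (port 4420) in on the initiator interface.
  $RUN iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  # Bidirectional reachability check, as the log's ping steps do.
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch cvl_0_0 cvl_0_1
```

With `RUN=echo` the function emits the command list for review; the real test harness then prefixes the target application with `ip netns exec cvl_0_0_ns_spdk`, which is why `nvmf_tgt` in the log runs inside the namespace.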
00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=4129630 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 4129630 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4129630 ']' 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.800 00:10:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.058 [2024-12-14 00:10:27.963084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:29:49.058 [2024-12-14 00:10:27.963176] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.058 [2024-12-14 00:10:28.083757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.058 [2024-12-14 00:10:28.190628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.058 [2024-12-14 00:10:28.190671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.058 [2024-12-14 00:10:28.190685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.058 [2024-12-14 00:10:28.190696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.058 [2024-12-14 00:10:28.190704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:49.058 [2024-12-14 00:10:28.193011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.058 [2024-12-14 00:10:28.193101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.058 [2024-12-14 00:10:28.193209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.058 [2024-12-14 00:10:28.193231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.992 [2024-12-14 00:10:28.813529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.992 00:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.992 00:10:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:49.992 Malloc1 00:29:49.992 [2024-12-14 00:10:28.972570] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.992 Malloc2 00:29:50.250 Malloc3 00:29:50.250 Malloc4 00:29:50.250 Malloc5 00:29:50.508 Malloc6 00:29:50.508 Malloc7 00:29:50.766 Malloc8 00:29:50.766 Malloc9 
00:29:50.766 Malloc10 00:29:50.766 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.766 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:50.766 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.766 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=4130028 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 4130028 /var/tmp/bdevperf.sock 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 4130028 ']' 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:51.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.024 { 00:29:51.024 "params": { 00:29:51.024 "name": "Nvme$subsystem", 00:29:51.024 "trtype": "$TEST_TRANSPORT", 00:29:51.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.024 "adrfam": "ipv4", 00:29:51.024 "trsvcid": "$NVMF_PORT", 00:29:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.024 "hdgst": ${hdgst:-false}, 00:29:51.024 "ddgst": ${ddgst:-false} 00:29:51.024 }, 00:29:51.024 "method": "bdev_nvme_attach_controller" 00:29:51.024 } 00:29:51.024 EOF 00:29:51.024 )") 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.024 { 00:29:51.024 "params": { 00:29:51.024 "name": "Nvme$subsystem", 00:29:51.024 "trtype": "$TEST_TRANSPORT", 00:29:51.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.024 
"adrfam": "ipv4", 00:29:51.024 "trsvcid": "$NVMF_PORT", 00:29:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.024 "hdgst": ${hdgst:-false}, 00:29:51.024 "ddgst": ${ddgst:-false} 00:29:51.024 }, 00:29:51.024 "method": "bdev_nvme_attach_controller" 00:29:51.024 } 00:29:51.024 EOF 00:29:51.024 )") 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.024 { 00:29:51.024 "params": { 00:29:51.024 "name": "Nvme$subsystem", 00:29:51.024 "trtype": "$TEST_TRANSPORT", 00:29:51.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.024 "adrfam": "ipv4", 00:29:51.024 "trsvcid": "$NVMF_PORT", 00:29:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.024 "hdgst": ${hdgst:-false}, 00:29:51.024 "ddgst": ${ddgst:-false} 00:29:51.024 }, 00:29:51.024 "method": "bdev_nvme_attach_controller" 00:29:51.024 } 00:29:51.024 EOF 00:29:51.024 )") 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.024 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.024 { 00:29:51.024 "params": { 00:29:51.024 "name": "Nvme$subsystem", 00:29:51.024 "trtype": "$TEST_TRANSPORT", 00:29:51.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.024 "adrfam": "ipv4", 00:29:51.024 "trsvcid": "$NVMF_PORT", 00:29:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:51.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.024 "hdgst": ${hdgst:-false}, 00:29:51.024 "ddgst": ${ddgst:-false} 00:29:51.024 }, 00:29:51.024 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": ${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": 
${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": ${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": ${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 
)") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": ${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.025 { 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme$subsystem", 00:29:51.025 "trtype": "$TEST_TRANSPORT", 00:29:51.025 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "$NVMF_PORT", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.025 "hdgst": ${hdgst:-false}, 00:29:51.025 "ddgst": ${ddgst:-false} 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 } 00:29:51.025 EOF 00:29:51.025 )") 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:51.025 
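The loop traced above (nvmf/common.sh@562-582) appends one `bdev_nvme_attach_controller` fragment per subsystem ID, then joins them into the JSON that bdevperf reads from `/dev/fd/63`. A compact bash sketch of that assembly is below; `gen_target_json_sketch` is a hypothetical stand-in for SPDK's `gen_nvmf_target_json`, with the addresses and NQN patterns taken from the log and the variable expansion done inline rather than via heredocs:

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: build one attach-controller config entry
# per subsystem ID, join them with commas, and wrap them in the bdev
# subsystem envelope that bdevperf consumes.
gen_target_json_sketch() {
  local config=() s
  for s in "${@:-1}"; do
    # Each fragment mirrors the per-subsystem block in the log:
    # Nvme$s attaches to nqn.2016-06.io.spdk:cnode$s at 10.0.0.2:4420.
    config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$s" "$s" "$s")")
  done
  # Comma-join the fragments, as the log's final "jq ." / printf step does.
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}"
}

gen_target_json_sketch 1 2 3
```

In the actual run this JSON is generated for subsystems 1 through 10 and fed to bdevperf via process substitution (`--json /dev/fd/63`), so no temporary config file is written.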
00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:51.025 [2024-12-14 00:10:29.983918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:29:51.025 [2024-12-14 00:10:29.984006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4130028 ] 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:51.025 00:10:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme1", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme2", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme3", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 
"method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme4", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme5", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme6", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme7", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme8", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:29:51.025 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:51.025 "hdgst": false, 00:29:51.025 "ddgst": false 00:29:51.025 }, 00:29:51.025 "method": "bdev_nvme_attach_controller" 00:29:51.025 },{ 00:29:51.025 "params": { 00:29:51.025 "name": "Nvme9", 00:29:51.025 "trtype": "tcp", 00:29:51.025 "traddr": "10.0.0.2", 00:29:51.025 "adrfam": "ipv4", 00:29:51.025 "trsvcid": "4420", 00:29:51.025 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:51.026 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:51.026 "hdgst": false, 00:29:51.026 "ddgst": false 00:29:51.026 }, 00:29:51.026 "method": "bdev_nvme_attach_controller" 00:29:51.026 },{ 00:29:51.026 "params": { 00:29:51.026 "name": "Nvme10", 00:29:51.026 "trtype": "tcp", 00:29:51.026 "traddr": "10.0.0.2", 00:29:51.026 "adrfam": "ipv4", 00:29:51.026 "trsvcid": "4420", 00:29:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:51.026 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:51.026 "hdgst": false, 00:29:51.026 "ddgst": false 00:29:51.026 }, 00:29:51.026 "method": "bdev_nvme_attach_controller" 00:29:51.026 }' 00:29:51.026 [2024-12-14 00:10:30.103238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.283 [2024-12-14 00:10:30.215649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.182 Running I/O for 10 seconds... 
00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:53.440 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:53.699 00:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:53.699 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 4129630 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4129630 ']' 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4129630 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4129630 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:53.972 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4129630' 00:29:53.972 killing process with pid 4129630 00:29:53.972 00:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 4129630 00:29:53.973 00:10:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 4129630 00:29:53.973 [2024-12-14 00:10:32.992103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000007480 is same with the state(6) to be set 00:29:53.973 [2024-12-14 00:10:32.995410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009880 is same with the state(6) to be set 00:29:53.973 [2024-12-14 00:10:32.999520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:29:53.974 [2024-12-14 00:10:32.999598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:53.974 [2024-12-14 00:10:32.999753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999818] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:53.974 [2024-12-14 00:10:32.999870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.974 [2024-12-14 00:10:32.999941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:32.999950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same 
with the state(6) to be set 00:29:53.974 [2024-12-14 00:10:33.002341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 
00:10:33.002512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.974 [2024-12-14 00:10:33.002650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.974 [2024-12-14 00:10:33.002659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 
[2024-12-14 00:10:33.002873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.002980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.002992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.975 [2024-12-14 00:10:33.003505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.975 [2024-12-14 00:10:33.003517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 
00:10:33.003733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.003755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.003764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:53.976 [2024-12-14 00:10:33.004240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:53.976 [2024-12-14 00:10:33.004256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008080 is same with the state(6) to be set 00:29:53.976 [2024-12-14 00:10:33.004816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.004980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.004990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 
00:10:33.005142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.976 [2024-12-14 00:10:33.005428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.976 [2024-12-14 00:10:33.005453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 
[2024-12-14 00:10:33.005632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005734] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.977 [2024-12-14 00:10:33.005790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.977 [2024-12-14 00:10:33.005800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.977 [2024-12-14 00:10:33.005807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.977 [2024-12-14 00:10:33.005941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.977 [2024-12-14 00:10:33.005955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.977 [2024-12-14 00:10:33.005961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.005965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.005971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.005977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.005988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.005990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.005999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.978 [2024-12-14 00:10:33.006292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:53.978 [2024-12-14 00:10:33.006301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:53.978 [2024-12-14 00:10:33.006334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set
00:29:53.978 [2024-12-14 00:10:33.006344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.978 [2024-12-14 00:10:33.006352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.978 [2024-12-14 00:10:33.006360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.978 [2024-12-14 00:10:33.006368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.978 [2024-12-14 00:10:33.006376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008480 is same with the state(6) to be set 00:29:53.978 [2024-12-14 00:10:33.006705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.978 [2024-12-14 00:10:33.006848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.978 [2024-12-14 00:10:33.006858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.006984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.006995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:53.979 [2024-12-14 00:10:33.007025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 
[2024-12-14 00:10:33.007505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.979 [2024-12-14 00:10:33.007692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.979 [2024-12-14 00:10:33.007701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.007988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.007998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.980 [2024-12-14 00:10:33.008875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.980 [2024-12-14 00:10:33.008913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008964] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.008998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 
[2024-12-14 00:10:33.009071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.980 [2024-12-14 00:10:33.009163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the 
state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009366] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.009462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008880 is same with the state(6) to be set 00:29:53.981 
[2024-12-14 00:10:33.012767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the 
state(6) to be set 00:29:53.981 [2024-12-14 00:10:33.012901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000008c80 is same with the state(6) to be set 00:29:53.982 [2024-12-14 00:10:33.015649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009080 is same with the state(6) to be set 00:29:53.983 [2024-12-14 00:10:33.017621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000009480 is same with the state(6) to be set 00:29:53.983 [2024-12-14 00:10:33.019287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.983 [2024-12-14 00:10:33.019539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.983 [2024-12-14 00:10:33.019552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 
00:10:33.019916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.019974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.019989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 
[2024-12-14 00:10:33.020566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.984 [2024-12-14 00:10:33.020694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.984 [2024-12-14 00:10:33.020710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.985 [2024-12-14 00:10:33.020722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.020738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.985 [2024-12-14 00:10:33.020750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.020765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.985 [2024-12-14 00:10:33.020778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.020793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.985 [2024-12-14 00:10:33.020806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.020847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.985 [2024-12-14 00:10:33.022906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:53.985 [2024-12-14 00:10:33.022955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:53.985 [2024-12-14 00:10:33.023016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023239] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023641] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:53.985 [2024-12-14 00:10:33.023918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.023984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.023997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.985 [2024-12-14 00:10:33.024010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.985 [2024-12-14 00:10:33.024028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:53.986 [2024-12-14 00:10:33.024055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.024083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.028476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:53.986 [2024-12-14 00:10:33.028520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:53.986 [2024-12-14 00:10:33.028541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:53.986 [2024-12-14 00:10:33.028566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.028587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.029889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.986 [2024-12-14 00:10:33.029927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:53.986 [2024-12-14 00:10:33.029945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:53.986 [2024-12-14 00:10:33.030056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.986 [2024-12-14 00:10:33.030075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:53.986 [2024-12-14 00:10:33.030090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:53.986 [2024-12-14 00:10:33.030181] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.032384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.986 [2024-12-14 00:10:33.032420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:53.986 [2024-12-14 00:10:33.032436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:53.986 [2024-12-14 00:10:33.032630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.986 [2024-12-14 00:10:33.032651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:53.986 [2024-12-14 00:10:33.032665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:53.986 [2024-12-14 00:10:33.032683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.032704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033153] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.033224] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.033288] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.033352] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.033413] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:53.986 [2024-12-14 00:10:33.033470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:53.986 [2024-12-14 00:10:33.033528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:53.986 [2024-12-14 00:10:33.033545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:53.986 [2024-12-14 00:10:33.033562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:29:53.986 [2024-12-14 00:10:33.033577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:53.986 [2024-12-14 00:10:33.033590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:53.986 [2024-12-14 00:10:33.033602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:53.986 [2024-12-14 00:10:33.033613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:53.986 [2024-12-14 00:10:33.033645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:53.986 [2024-12-14 00:10:33.033986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:53.986 [2024-12-14 00:10:33.034006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:53.986 [2024-12-14 00:10:33.034019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:29:53.986 [2024-12-14 00:10:33.034032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:53.986 [2024-12-14 00:10:33.034046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:53.986 [2024-12-14 00:10:33.034058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:53.986 [2024-12-14 00:10:33.034071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:53.986 [2024-12-14 00:10:33.034083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:53.986 [... repeated nvme_qpair.c notices elided: READ commands (sqid:1 cid:5-63 nsid:1 lba:17024-24448 len:128) and WRITE commands (sqid:1 cid:0-4 nsid:1 lba:24576-25088 len:128), each followed by completion ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:29:53.988 [2024-12-14 00:10:33.036084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032d500 is same with the state(6) to be set 00:29:53.988 [2024-12-14 00:10:33.037473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:53.988 [2024-12-14 00:10:33.037810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-12-14 00:10:33.037831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:53.988 [2024-12-14 00:10:33.037845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:53.988 [2024-12-14 00:10:33.038290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:53.988 [2024-12-14 00:10:33.038425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:53.988 [2024-12-14 00:10:33.038446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:53.988 [2024-12-14 00:10:33.038457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:53.988 [2024-12-14 00:10:33.038468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:53.988 [2024-12-14 00:10:33.038728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:53.988 [2024-12-14 00:10:33.038788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:53.988 [2024-12-14 00:10:33.039110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-12-14 00:10:33.039129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:53.988 [2024-12-14 00:10:33.039140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:53.988 [2024-12-14 00:10:33.039398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-12-14 00:10:33.039413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:53.988 [2024-12-14 00:10:33.039422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:53.988 [2024-12-14 00:10:33.039435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:53.988 [2024-12-14 00:10:33.039483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:53.988 [2024-12-14 00:10:33.039495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:53.988 [2024-12-14 00:10:33.039504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:53.988 [2024-12-14 00:10:33.039514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed 
state. 00:29:53.988 [2024-12-14 00:10:33.039525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:53.988 [2024-12-14 00:10:33.039563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:53.988 [2024-12-14 00:10:33.039573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:53.988 [2024-12-14 00:10:33.039582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:53.988 [2024-12-14 00:10:33.039590] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:53.988 [2024-12-14 00:10:33.040223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:53.988 [2024-12-14 00:10:33.040284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:53.988 [2024-12-14 00:10:33.040534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-12-14 00:10:33.040552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:53.988 [2024-12-14 00:10:33.040563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:53.988 [2024-12-14 00:10:33.040803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-12-14 00:10:33.040819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:53.988 [2024-12-14 00:10:33.040829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be 
set 00:29:53.988 [2024-12-14 00:10:33.040841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:53.988 [2024-12-14 00:10:33.040883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:53.988 [2024-12-14 00:10:33.040894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:53.988 [2024-12-14 00:10:33.040903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:53.988 [2024-12-14 00:10:33.040912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:53.988 [2024-12-14 00:10:33.040921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:53.988 [2024-12-14 00:10:33.040959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:53.988 [2024-12-14 00:10:33.040969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:53.988 [2024-12-14 00:10:33.040977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:53.988 [2024-12-14 00:10:33.040985] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:29:53.988 [... repeated nvme_qpair.c notices elided: READ commands (sqid:1 cid:0-27 nsid:1 lba:16384-19840 len:128), each followed by completion ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:29:53.989 [2024-12-14 00:10:33.044261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 
00:10:33.044519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044636] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.989 [2024-12-14 00:10:33.044799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.989 [2024-12-14 00:10:33.044808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 
[2024-12-14 00:10:33.044870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.044988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.044998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.045009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.045018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.045028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032df00 is same with the state(6) to be set 00:29:53.990 [2024-12-14 00:10:33.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:53.990 [2024-12-14 00:10:33.046584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.990 [2024-12-14 00:10:33.046892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.990 [2024-12-14 00:10:33.046901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.046912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.046921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.046932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.046941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.046952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.046962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.046973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.046982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.046993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 
00:10:33.047045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.991 [2024-12-14 00:10:33.047156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.991 [2024-12-14 00:10:33.047167] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.991 [2024-12-14 00:10:33.047176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:38-63, lba:21248-24448, len:128 ...]
00:29:53.991 [2024-12-14 00:10:33.047729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e180 is same with the state(6) to be set
00:29:53.991 [2024-12-14 00:10:33.049062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.991 [2024-12-14 00:10:33.049080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-63, lba:16512-24448, len:128 ...]
00:29:53.993 [2024-12-14 00:10:33.050425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e400 is same with the state(6) to be set
00:29:53.993 [2024-12-14 00:10:33.051737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:53.993 [2024-12-14 00:10:33.051755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeated for cid:1-26, lba:16512-19712, len:128 ...]
00:29:53.994 [2024-12-14 00:10:33.052315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 
00:10:33.052444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 
[2024-12-14 00:10:33.052804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.994 [2024-12-14 00:10:33.052879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.994 [2024-12-14 00:10:33.052888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.052900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.052909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.052920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.052930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.052941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.052950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.052961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.052970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.052981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.052991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.053011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.053032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.053053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.053094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.053106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e680 is same with the state(6) to be set 00:29:53.995 [2024-12-14 00:10:33.054413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
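The entries above all follow the same two-line pattern from SPDK's `nvme_qpair.c`: a READ command print, then a completion aborted with status `(00/08)` — per the NVMe specification, Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion", which matches the qpair being torn down in this test. A minimal, illustrative parser for these lines (the regexes and `parse_entry` helper are assumptions inferred from the print format visible here, not an SPDK API):

```python
import re

# Hypothetical regexes matching the nvme_qpair.c print formats seen in this log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<opc>\w+) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<msg>[A-Z -]+) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)

def parse_entry(line):
    """Return a dict describing a command or completion entry, else None."""
    m = CMD_RE.search(line)
    if m:
        d = {k: int(v) if v.isdigit() else v for k, v in m.groupdict().items()}
        d["kind"] = "command"
        return d
    m = CPL_RE.search(line)
    if m:
        return {
            "kind": "completion",
            "status": m.group("msg").strip(),
            # Status Code Type / Status Code: 00/08 decodes to
            # Generic Command Status / Command Aborted due to SQ Deletion.
            "sct": int(m.group("sct"), 16),
            "sc": int(m.group("sc"), 16),
        }
    return None

cmd = parse_entry(
    "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ "
    "sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"
)
cpl = parse_entry(
    "nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - "
    "SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
)
print(cmd["cid"], cmd["lba"], cpl["status"], cpl["sct"], cpl["sc"])
```

Run over the lines above, such a parser would show every outstanding READ (cid 0..63, lba stepping by 128 blocks) completing with the same abort status — the expected result of deleting submission queue 1 while I/O is in flight, not a media or transport data error.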
00:29:53.995 [2024-12-14 00:10:33.054471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:53.995 [2024-12-14 00:10:33.054831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.054979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.054989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.055000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.055009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.055021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.055030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.995 [2024-12-14 00:10:33.055042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.995 [2024-12-14 00:10:33.055051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 
00:10:33.055305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 
[2024-12-14 00:10:33.055660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.996 [2024-12-14 00:10:33.055766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.996 [2024-12-14 00:10:33.055775] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032e900 is same with the state(6) to be set 00:29:53.996 [2024-12-14 00:10:33.057044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:53.996 [2024-12-14 00:10:33.057065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:53.996 [2024-12-14 00:10:33.057079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:53.996 [2024-12-14 00:10:33.057095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:53.996 [2024-12-14 00:10:33.057186] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:53.996 task offset: 16384 on job bdev=Nvme10n1 fails 00:29:53.996 00:29:53.996 Latency(us) 00:29:53.996 [2024-12-13T23:10:33.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.996 Job: Nvme1n1 ended in about 0.84 seconds with error 00:29:53.996 Verification LBA range: start 0x0 length 0x400 00:29:53.996 Nvme1n1 : 0.84 157.55 9.85 75.82 0.00 271239.83 29709.65 225693.50 00:29:53.996 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.996 Job: Nvme2n1 ended in about 0.83 seconds with error 00:29:53.996 Verification LBA range: start 0x0 length 0x400 00:29:53.996 Nvme2n1 : 0.83 230.70 14.42 76.90 0.00 201484.31 12358.22 236678.58 00:29:53.996 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.996 Job: Nvme3n1 ended in about 0.83 seconds with error 00:29:53.996 Verification LBA range: start 0x0 length 0x400 00:29:53.996 Nvme3n1 : 0.83 230.33 14.40 76.78 0.00 197620.54 24591.60 229688.08 00:29:53.997 Job: Nvme4n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme4n1 ended in about 0.83 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme4n1 : 0.83 229.98 14.37 76.66 0.00 193723.00 18474.91 242670.45 00:29:53.997 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme5n1 ended in about 0.85 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme5n1 : 0.85 150.06 9.38 75.03 0.00 258901.33 18974.23 238675.87 00:29:53.997 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme6n1 ended in about 0.86 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme6n1 : 0.86 149.58 9.35 74.79 0.00 254170.45 20097.71 259647.39 00:29:53.997 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme7n1 ended in about 0.86 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme7n1 : 0.86 149.11 9.32 74.56 0.00 249510.44 16852.11 239674.51 00:29:53.997 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme8n1 ended in about 0.86 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme8n1 : 0.86 148.65 9.29 74.33 0.00 244924.55 18225.25 220700.28 00:29:53.997 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme9n1 ended in about 0.86 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme9n1 : 0.86 148.19 9.26 74.10 0.00 240272.34 24716.43 257650.10 00:29:53.997 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:53.997 Job: Nvme10n1 ended in about 0.83 seconds with error 00:29:53.997 Verification LBA range: start 0x0 length 0x400 00:29:53.997 Nvme10n1 : 0.83 154.27 9.64 77.14 0.00 
223334.07 19223.89 269633.83 00:29:53.997 [2024-12-13T23:10:33.138Z] =================================================================================================================== 00:29:53.997 [2024-12-13T23:10:33.138Z] Total : 1748.44 109.28 756.09 0.00 230350.45 12358.22 269633.83 00:29:54.257 [2024-12-14 00:10:33.188581] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:54.257 [2024-12-14 00:10:33.188643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.189017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.189041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328280 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.189057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328280 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.189143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.189158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000328c80 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.189168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000328c80 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.189399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.189414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000329680 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.189424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000329680 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.189533] posix.c:1054:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.189547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032a080 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.189557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032a080 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.191561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.191593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.191606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.191617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.191629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:54.257 [2024-12-14 00:10:33.191968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.191987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032aa80 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.191999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032aa80 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.192019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328280 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.192036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000328c80 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.192049] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000329680 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.192065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032a080 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.192107] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:54.257 [2024-12-14 00:10:33.192122] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:29:54.257 [2024-12-14 00:10:33.192137] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:29:54.257 [2024-12-14 00:10:33.192151] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:29:54.257 [2024-12-14 00:10:33.192782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.192806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.192818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.193044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.193059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326480 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.193069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.193152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.193166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032b480 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.193176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500032b480 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.193333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.193347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.193358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000327880 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.193583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-12-14 00:10:33.193598] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326e80 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-12-14 00:10:33.193609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326e80 is same with the state(6) to be set 00:29:54.257 [2024-12-14 00:10:33.193622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032aa80 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:54.257 [2024-12-14 00:10:33.193645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:54.257 [2024-12-14 00:10:33.193657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:54.257 [2024-12-14 00:10:33.193670] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:54.257 [2024-12-14 00:10:33.193682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:54.257 [2024-12-14 00:10:33.193690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:54.257 [2024-12-14 00:10:33.193702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:54.257 [2024-12-14 00:10:33.193711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:54.257 [2024-12-14 00:10:33.193721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:54.257 [2024-12-14 00:10:33.193729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:54.257 [2024-12-14 00:10:33.193739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:54.257 [2024-12-14 00:10:33.193747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:54.257 [2024-12-14 00:10:33.193756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:54.257 [2024-12-14 00:10:33.193764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:54.257 [2024-12-14 00:10:33.193773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:54.257 [2024-12-14 00:10:33.193782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:54.257 [2024-12-14 00:10:33.193877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032b480 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000327880 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326e80 (9): Bad file descriptor 00:29:54.257 [2024-12-14 00:10:33.193945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:54.257 [2024-12-14 00:10:33.193953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:54.257 [2024-12-14 00:10:33.193962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:54.258 [2024-12-14 00:10:33.193971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:54.258 [2024-12-14 00:10:33.194008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:54.258 [2024-12-14 00:10:33.194018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:54.258 [2024-12-14 00:10:33.194027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:29:54.258 [2024-12-14 00:10:33.194036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:29:54.258 [2024-12-14 00:10:33.194045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:54.258 [2024-12-14 00:10:33.194053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:54.258 [2024-12-14 00:10:33.194062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:54.258 [2024-12-14 00:10:33.194070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:54.258 [2024-12-14 00:10:33.194079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:54.258 [2024-12-14 00:10:33.194087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:54.258 [2024-12-14 00:10:33.194098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:54.258 [2024-12-14 00:10:33.194107] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:54.258 [2024-12-14 00:10:33.194116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:54.258 [2024-12-14 00:10:33.194125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:54.258 [2024-12-14 00:10:33.194133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:29:54.258 [2024-12-14 00:10:33.194142] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:54.258 [2024-12-14 00:10:33.194151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:54.258 [2024-12-14 00:10:33.194159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:54.258 [2024-12-14 00:10:33.194169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:54.258 [2024-12-14 00:10:33.194178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:57.545 00:10:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 4130028 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4130028 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 4130028 
00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:58.113 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.114 00:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.114 rmmod nvme_tcp 00:29:58.114 rmmod nvme_fabrics 00:29:58.114 rmmod nvme_keyring 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 4129630 ']' 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 4129630 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 4129630 ']' 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 4129630 00:29:58.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4129630) - No such process 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4129630 is not found' 00:29:58.114 Process with pid 4129630 is not found 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:58.114 00:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.114 00:10:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.648 00:30:00.648 real 0m11.673s 00:30:00.648 user 0m34.526s 00:30:00.648 sys 0m1.665s 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:00.648 ************************************ 00:30:00.648 END TEST nvmf_shutdown_tc3 00:30:00.648 ************************************ 00:30:00.648 00:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:00.648 ************************************ 00:30:00.648 START TEST nvmf_shutdown_tc4 00:30:00.648 ************************************ 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.648 00:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.648 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:00.649 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:00.649 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.649 00:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:00.649 Found net devices under 0000:af:00.0: cvl_0_0 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.649 00:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:00.649 Found net devices under 0000:af:00.1: cvl_0_1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.649 
00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:30:00.649 00:30:00.649 --- 10.0.0.2 ping statistics --- 00:30:00.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.649 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:30:00.649 00:30:00.649 --- 10.0.0.1 ping statistics --- 00:30:00.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.649 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.649 00:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.649 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=4131715 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 4131715 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 4131715 ']' 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.650 00:10:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:00.650 [2024-12-14 00:10:39.706985] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:00.650 [2024-12-14 00:10:39.707069] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.909 [2024-12-14 00:10:39.821291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.909 [2024-12-14 00:10:39.928570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.909 [2024-12-14 00:10:39.928618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.909 [2024-12-14 00:10:39.928628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.909 [2024-12-14 00:10:39.928639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.909 [2024-12-14 00:10:39.928646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:00.909 [2024-12-14 00:10:39.930961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.909 [2024-12-14 00:10:39.931033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.909 [2024-12-14 00:10:39.931149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.909 [2024-12-14 00:10:39.931172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:01.476 [2024-12-14 00:10:40.547445] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.476 00:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.476 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.477 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.736 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:01.736 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:01.736 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:01.736 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.736 00:10:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:01.736 Malloc1 00:30:01.736 [2024-12-14 00:10:40.719222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.736 Malloc2 00:30:01.994 Malloc3 00:30:01.994 Malloc4 00:30:01.994 Malloc5 00:30:02.253 Malloc6 00:30:02.253 Malloc7 00:30:02.512 Malloc8 00:30:02.512 Malloc9 
00:30:02.512 Malloc10 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=4131995 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:02.512 00:10:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:02.770 [2024-12-14 00:10:41.741681] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 4131715 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4131715 ']' 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4131715 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4131715 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4131715' 00:30:08.041 killing process with pid 4131715 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 4131715 00:30:08.041 00:10:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 4131715 00:30:08.041 [2024-12-14 00:10:46.712915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set 00:30:08.041 [2024-12-14 
00:10:46.712984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set
00:30:08.041 [2024-12-14 00:10:46.713031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006480 is same with the state(6) to be set
00:30:08.041 Write completed with error (sct=0, sc=8)
00:30:08.041 starting I/O failed: -6
00:30:08.041 [2024-12-14 00:10:46.714627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.041 NVMe io qpair process completion error
00:30:08.041 [2024-12-14 00:10:46.715359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set
00:30:08.041 [2024-12-14 00:10:46.715422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006c80 is same with the state(6) to be set
00:30:08.041 [2024-12-14 00:10:46.716979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set
00:30:08.041 [2024-12-14 00:10:46.717100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000006080 is same with the state(6) to be set
00:30:08.041 Write completed with error (sct=0, sc=8)
00:30:08.041 starting I/O failed: -6
00:30:08.041 [2024-12-14 00:10:46.730353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.042 Write completed with error (sct=0, sc=8)
00:30:08.042 starting I/O failed: -6
00:30:08.042 [2024-12-14 00:10:46.732340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.042 Write completed with error (sct=0, sc=8)
00:30:08.042 starting I/O failed: -6
00:30:08.042 [2024-12-14 00:10:46.734827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.043 Write completed with error (sct=0, sc=8)
00:30:08.043 starting I/O failed: -6
00:30:08.043 [2024-12-14 00:10:46.745933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.043 NVMe io qpair process completion error
00:30:08.043 Write completed with error (sct=0, sc=8)
00:30:08.043 starting I/O failed: -6
00:30:08.043 [2024-12-14 00:10:46.747525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.043 Write completed with error (sct=0, sc=8)
00:30:08.043 starting I/O failed: -6
00:30:08.043 [2024-12-14 00:10:46.750502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.043 Write completed with error (sct=0, sc=8)
00:30:08.044 starting I/O failed: -6
00:30:08.044 [2024-12-14 00:10:46.753626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.044 NVMe io qpair process completion error
00:30:08.044 [2024-12-14 00:10:46.754341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set
00:30:08.044 [2024-12-14 00:10:46.754456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000e080 is same with the state(6) to be set
00:30:08.044 Write completed with error (sct=0, sc=8)
00:30:08.044 starting I/O failed: -6
00:30:08.044 [2024-12-14 00:10:46.755881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.044 Write completed with error (sct=0, sc=8)
00:30:08.044 starting I/O failed: -6
00:30:08.044 [2024-12-14 00:10:46.757485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.044 Write completed with error (sct=0, sc=8)
00:30:08.044 starting I/O failed: -6
00:30:08.044 Write
completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.044 starting I/O failed: -6 00:30:08.044 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 
00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, 
sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error 
(sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 [2024-12-14 00:10:46.771132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.045 NVMe io qpair process completion error 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O 
failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.045 starting I/O failed: -6 00:30:08.045 Write completed with error (sct=0, sc=8) 00:30:08.046 [2024-12-14 00:10:46.772707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed 
with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 [2024-12-14 00:10:46.774582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O 
failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 
00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 [2024-12-14 00:10:46.777299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: 
-6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O 
failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.046 Write completed with error (sct=0, sc=8) 00:30:08.046 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting 
I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 starting I/O failed: -6 00:30:08.047 [2024-12-14 00:10:46.788032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.047 NVMe io qpair process completion error 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write 
completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write 
completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 [2024-12-14 00:10:46.793072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.047 Write completed with error (sct=0, sc=8) 00:30:08.047 Write completed with 
00:30:08.047 Write completed with error (sct=0, sc=8)
00:30:08.047 [repeated "Write completed with error (sct=0, sc=8)" lines elided]
00:30:08.047 [2024-12-14 00:10:46.805894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.047 NVMe io qpair process completion error
00:30:08.047 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.048 [2024-12-14 00:10:46.807484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.048 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.048 [2024-12-14 00:10:46.809281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.048 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.048 [2024-12-14 00:10:46.811906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.049 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.049 [2024-12-14 00:10:46.826262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.049 NVMe io qpair process completion error
00:30:08.049 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.049 [2024-12-14 00:10:46.827984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.049 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.049 [2024-12-14 00:10:46.829828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.050 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.050 [2024-12-14 00:10:46.832388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:08.050 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.050 [2024-12-14 00:10:46.848432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:08.050 NVMe io qpair process completion error
00:30:08.050 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.051 [2024-12-14 00:10:46.849996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:08.051 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.051 [2024-12-14 00:10:46.851896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:08.051 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with 
error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 [2024-12-14 00:10:46.854512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 
starting I/O failed: -6 00:30:08.051 Write completed with error (sct=0, sc=8) 00:30:08.051 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 
00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, 
sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 [2024-12-14 00:10:46.865184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.052 NVMe io qpair process completion error 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 
00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 [2024-12-14 00:10:46.866846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error 
(sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 
00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 starting I/O failed: -6 00:30:08.052 Write completed with error (sct=0, sc=8) 00:30:08.052 [2024-12-14 00:10:46.868715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 
starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 
Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 [2024-12-14 00:10:46.871221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 
00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, 
sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error 
(sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 [2024-12-14 00:10:46.885281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.053 NVMe io qpair process completion error 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.053 starting I/O failed: -6 00:30:08.053 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error 
(sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 [2024-12-14 00:10:46.886884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed 
with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: 
-6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 [2024-12-14 00:10:46.888565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: 
-6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with 
error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 [2024-12-14 00:10:46.891168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed 
with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.054 Write completed with error (sct=0, sc=8) 00:30:08.054 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write 
completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 
Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 starting I/O failed: -6 00:30:08.055 [2024-12-14 00:10:46.908856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:08.055 NVMe io qpair process completion error 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error 
(sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error (sct=0, sc=8) 00:30:08.055 Write completed with error 
(sct=0, sc=8) 00:30:08.055 Initializing NVMe Controllers 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:30:08.055 Controller IO queue size 128, less than required. 00:30:08.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.055 Controller IO queue size 128, less than required. 00:30:08.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:30:08.055 Controller IO queue size 128, less than required. 00:30:08.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:30:08.055 Controller IO queue size 128, less than required. 00:30:08.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:08.055 Controller IO queue size 128, less than required. 00:30:08.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:30:08.056 Controller IO queue size 128, less than required. 00:30:08.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:30:08.056 Controller IO queue size 128, less than required. 
00:30:08.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:30:08.056 Controller IO queue size 128, less than required. 00:30:08.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:30:08.056 Controller IO queue size 128, less than required. 00:30:08.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:30:08.056 Controller IO queue size 128, less than required. 00:30:08.056 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:30:08.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:30:08.056 Initialization complete. 
Launching workers. 00:30:08.056 ======================================================== 00:30:08.056 Latency(us) 00:30:08.056 Device Information : IOPS MiB/s Average min max 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1884.29 80.97 67944.67 1841.89 164815.81 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1844.89 79.27 69417.21 942.80 180788.68 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1866.63 80.21 68838.65 1084.78 176129.82 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1866.85 80.22 69004.17 1203.37 166669.06 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1864.70 80.12 69275.29 1706.81 186931.53 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1796.67 77.20 72031.39 1274.38 226827.38 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1795.81 77.16 72225.97 1645.12 244041.14 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1795.59 77.15 69703.48 1235.49 116504.50 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1817.34 78.09 70093.77 1285.73 232981.04 00:30:08.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1784.18 76.66 70259.56 1223.13 132721.92 00:30:08.056 ======================================================== 00:30:08.056 Total : 18316.94 787.06 69859.26 942.80 244041.14 00:30:08.056 00:30:08.056 [2024-12-14 00:10:46.960618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001de00 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960733] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020b00 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f200 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001f700 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e300 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.960988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fc00 is same with the state(6) to be set 00:30:08.056 [2024-12-14 00:10:46.961036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set 00:30:08.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:11.341 00:10:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 4131995 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 4131995 00:30:11.909 
00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 4131995 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:11.909 
00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.909 rmmod nvme_tcp 00:30:11.909 rmmod nvme_fabrics 00:30:11.909 rmmod nvme_keyring 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 4131715 ']' 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 4131715 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 4131715 ']' 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 4131715 00:30:11.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4131715) - No such process 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 4131715 is 
not found' 00:30:11.909 Process with pid 4131715 is not found 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.909 00:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.444 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.445 00:30:14.445 real 0m13.668s 00:30:14.445 user 0m39.538s 00:30:14.445 sys 0m4.960s 00:30:14.445 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:30:14.445 00:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 ************************************ 00:30:14.445 END TEST nvmf_shutdown_tc4 00:30:14.445 ************************************ 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:14.445 00:30:14.445 real 0m59.466s 00:30:14.445 user 2m54.909s 00:30:14.445 sys 0m14.717s 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 ************************************ 00:30:14.445 END TEST nvmf_shutdown 00:30:14.445 ************************************ 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:14.445 ************************************ 00:30:14.445 START TEST nvmf_nsid 00:30:14.445 ************************************ 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:14.445 * Looking for test storage... 
00:30:14.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.445 
00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.445 --rc genhtml_branch_coverage=1 00:30:14.445 --rc genhtml_function_coverage=1 00:30:14.445 --rc genhtml_legend=1 00:30:14.445 --rc geninfo_all_blocks=1 00:30:14.445 --rc 
geninfo_unexecuted_blocks=1 00:30:14.445 00:30:14.445 ' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.445 --rc genhtml_branch_coverage=1 00:30:14.445 --rc genhtml_function_coverage=1 00:30:14.445 --rc genhtml_legend=1 00:30:14.445 --rc geninfo_all_blocks=1 00:30:14.445 --rc geninfo_unexecuted_blocks=1 00:30:14.445 00:30:14.445 ' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.445 --rc genhtml_branch_coverage=1 00:30:14.445 --rc genhtml_function_coverage=1 00:30:14.445 --rc genhtml_legend=1 00:30:14.445 --rc geninfo_all_blocks=1 00:30:14.445 --rc geninfo_unexecuted_blocks=1 00:30:14.445 00:30:14.445 ' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.445 --rc genhtml_branch_coverage=1 00:30:14.445 --rc genhtml_function_coverage=1 00:30:14.445 --rc genhtml_legend=1 00:30:14.445 --rc geninfo_all_blocks=1 00:30:14.445 --rc geninfo_unexecuted_blocks=1 00:30:14.445 00:30:14.445 ' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.445 00:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.445 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:14.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.446 00:10:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:19.845 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:19.845 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:19.845 Found net devices under 0000:af:00.0: cvl_0_0 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:19.845 Found net devices under 0000:af:00.1: cvl_0_1 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.845 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:19.846 00:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:19.846 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:30:19.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:30:19.846 00:30:19.846 --- 10.0.0.2 ping statistics --- 00:30:19.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.846 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:19.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:30:19.846 00:30:19.846 --- 10.0.0.1 ping statistics --- 00:30:19.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.846 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:19.846 00:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=4136826 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 4136826 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4136826 ']' 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.846 00:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:19.846 [2024-12-14 00:10:58.761918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:19.846 [2024-12-14 00:10:58.762011] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.846 [2024-12-14 00:10:58.878102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.105 [2024-12-14 00:10:58.990373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.105 [2024-12-14 00:10:58.990416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.105 [2024-12-14 00:10:58.990427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.105 [2024-12-14 00:10:58.990445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.105 [2024-12-14 00:10:58.990454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:20.105 [2024-12-14 00:10:58.991877] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=4137065 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.673 
00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e7b29949-f5de-46bc-a52a-d0670223d23b 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=331175c4-9c67-4b7a-9701-adb1b431edd8 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4787b6cf-46f0-4ef8-bff3-f46b6ded10cd 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 null0 00:30:20.673 null1 00:30:20.673 null2 00:30:20.673 [2024-12-14 00:10:59.650345] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.673 [2024-12-14 00:10:59.674603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.673 [2024-12-14 00:10:59.675337] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 
initialization... 00:30:20.673 [2024-12-14 00:10:59.675422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137065 ] 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 4137065 /var/tmp/tgt2.sock 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 4137065 ']' 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:20.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.673 00:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 [2024-12-14 00:10:59.787251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.931 [2024-12-14 00:10:59.899125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.868 00:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.868 00:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:21.868 00:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:22.127 [2024-12-14 00:11:01.061896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.127 [2024-12-14 00:11:01.078036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:22.127 nvme0n1 nvme0n2 00:30:22.127 nvme1n1 00:30:22.127 00:11:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:22.127 00:11:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:22.127 00:11:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:23.063 00:11:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e7b29949-f5de-46bc-a52a-d0670223d23b 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:24.440 00:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e7b29949f5de46bca52ad0670223d23b 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E7B29949F5DE46BCA52AD0670223D23B 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E7B29949F5DE46BCA52AD0670223D23B == \E\7\B\2\9\9\4\9\F\5\D\E\4\6\B\C\A\5\2\A\D\0\6\7\0\2\2\3\D\2\3\B ]] 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 331175c4-9c67-4b7a-9701-adb1b431edd8 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:24.440 
00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=331175c49c674b7a9701adb1b431edd8 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 331175C49C674B7A9701ADB1B431EDD8 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 331175C49C674B7A9701ADB1B431EDD8 == \3\3\1\1\7\5\C\4\9\C\6\7\4\B\7\A\9\7\0\1\A\D\B\1\B\4\3\1\E\D\D\8 ]] 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4787b6cf-46f0-4ef8-bff3-f46b6ded10cd 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4787b6cf46f04ef8bff3f46b6ded10cd 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4787B6CF46F04EF8BFF3F46B6DED10CD 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4787B6CF46F04EF8BFF3F46B6DED10CD == \4\7\8\7\B\6\C\F\4\6\F\0\4\E\F\8\B\F\F\3\F\4\6\B\6\D\E\D\1\0\C\D ]] 00:30:24.440 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 4137065 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4137065 ']' 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4137065 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.699 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4137065 00:30:24.957 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:24.957 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:24.957 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4137065' 00:30:24.957 killing process with pid 4137065 00:30:24.957 00:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4137065 00:30:24.957 00:11:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4137065 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:27.489 rmmod nvme_tcp 00:30:27.489 rmmod nvme_fabrics 00:30:27.489 rmmod nvme_keyring 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 4136826 ']' 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 4136826 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 4136826 ']' 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 4136826 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.489 00:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4136826 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4136826' 00:30:27.489 killing process with pid 4136826 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 4136826 00:30:27.489 00:11:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 4136826 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.426 00:11:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.426 00:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.330 00:30:30.330 real 0m16.342s 00:30:30.330 user 0m16.968s 00:30:30.330 sys 0m5.404s 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:30.330 ************************************ 00:30:30.330 END TEST nvmf_nsid 00:30:30.330 ************************************ 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:30.330 00:30:30.330 real 18m45.093s 00:30:30.330 user 49m59.459s 00:30:30.330 sys 4m2.262s 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.330 00:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:30.330 ************************************ 00:30:30.330 END TEST nvmf_target_extra 00:30:30.330 ************************************ 00:30:30.589 00:11:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:30.589 00:11:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.589 00:11:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.589 00:11:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.589 ************************************ 00:30:30.589 START TEST nvmf_host 00:30:30.589 ************************************ 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:30.589 * Looking for test storage... 
00:30:30.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.589 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:30.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.590 --rc genhtml_branch_coverage=1 00:30:30.590 --rc genhtml_function_coverage=1 00:30:30.590 --rc genhtml_legend=1 00:30:30.590 --rc geninfo_all_blocks=1 00:30:30.590 --rc geninfo_unexecuted_blocks=1 00:30:30.590 00:30:30.590 ' 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:30.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.590 --rc genhtml_branch_coverage=1 00:30:30.590 --rc genhtml_function_coverage=1 00:30:30.590 --rc genhtml_legend=1 00:30:30.590 --rc 
geninfo_all_blocks=1 00:30:30.590 --rc geninfo_unexecuted_blocks=1 00:30:30.590 00:30:30.590 ' 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:30.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.590 --rc genhtml_branch_coverage=1 00:30:30.590 --rc genhtml_function_coverage=1 00:30:30.590 --rc genhtml_legend=1 00:30:30.590 --rc geninfo_all_blocks=1 00:30:30.590 --rc geninfo_unexecuted_blocks=1 00:30:30.590 00:30:30.590 ' 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:30.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.590 --rc genhtml_branch_coverage=1 00:30:30.590 --rc genhtml_function_coverage=1 00:30:30.590 --rc genhtml_legend=1 00:30:30.590 --rc geninfo_all_blocks=1 00:30:30.590 --rc geninfo_unexecuted_blocks=1 00:30:30.590 00:30:30.590 ' 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.590 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.850 ************************************ 00:30:30.850 START TEST nvmf_multicontroller 00:30:30.850 ************************************ 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:30.850 * Looking for test storage... 
00:30:30.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:30.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.850 --rc genhtml_branch_coverage=1 00:30:30.850 --rc genhtml_function_coverage=1 
00:30:30.850 --rc genhtml_legend=1 00:30:30.850 --rc geninfo_all_blocks=1 00:30:30.850 --rc geninfo_unexecuted_blocks=1 00:30:30.850 00:30:30.850 ' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:30.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.850 --rc genhtml_branch_coverage=1 00:30:30.850 --rc genhtml_function_coverage=1 00:30:30.850 --rc genhtml_legend=1 00:30:30.850 --rc geninfo_all_blocks=1 00:30:30.850 --rc geninfo_unexecuted_blocks=1 00:30:30.850 00:30:30.850 ' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:30.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.850 --rc genhtml_branch_coverage=1 00:30:30.850 --rc genhtml_function_coverage=1 00:30:30.850 --rc genhtml_legend=1 00:30:30.850 --rc geninfo_all_blocks=1 00:30:30.850 --rc geninfo_unexecuted_blocks=1 00:30:30.850 00:30:30.850 ' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:30.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.850 --rc genhtml_branch_coverage=1 00:30:30.850 --rc genhtml_function_coverage=1 00:30:30.850 --rc genhtml_legend=1 00:30:30.850 --rc geninfo_all_blocks=1 00:30:30.850 --rc geninfo_unexecuted_blocks=1 00:30:30.850 00:30:30.850 ' 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.850 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.851 00:11:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.851 00:11:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:37.426 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:37.426 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.426 00:11:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.426 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:37.427 Found net devices under 0000:af:00.0: cvl_0_0 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:37.427 Found net devices under 0000:af:00.1: cvl_0_1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:30:37.427 00:30:37.427 --- 10.0.0.2 ping statistics --- 00:30:37.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.427 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:37.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:30:37.427 00:30:37.427 --- 10.0.0.1 ping statistics --- 00:30:37.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.427 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=4141757 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 4141757 00:30:37.427 00:11:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4141757 ']' 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.427 00:11:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.427 [2024-12-14 00:11:15.691328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:37.427 [2024-12-14 00:11:15.691418] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.427 [2024-12-14 00:11:15.806750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:37.427 [2024-12-14 00:11:15.911081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.427 [2024-12-14 00:11:15.911125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:37.427 [2024-12-14 00:11:15.911137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.427 [2024-12-14 00:11:15.911147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.427 [2024-12-14 00:11:15.911155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.427 [2024-12-14 00:11:15.913570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.427 [2024-12-14 00:11:15.913633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.427 [2024-12-14 00:11:15.913655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.427 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.428 [2024-12-14 00:11:16.548289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.428 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 Malloc0 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 [2024-12-14 
00:11:16.661071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 [2024-12-14 00:11:16.673022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 Malloc1 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4141996 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4141996 /var/tmp/bdevperf.sock 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 4141996 ']' 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:37.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.687 00:11:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.623 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.623 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:38.623 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:38.623 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.623 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.882 NVMe0n1 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.882 1 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:38.882 00:11:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.882 request: 00:30:38.882 { 00:30:38.882 "name": "NVMe0", 00:30:38.882 "trtype": "tcp", 00:30:38.882 "traddr": "10.0.0.2", 00:30:38.882 "adrfam": "ipv4", 00:30:38.882 "trsvcid": "4420", 00:30:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.882 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:38.882 "hostaddr": "10.0.0.1", 00:30:38.882 "prchk_reftag": false, 00:30:38.882 "prchk_guard": false, 00:30:38.882 "hdgst": false, 00:30:38.882 "ddgst": false, 00:30:38.882 "allow_unrecognized_csi": false, 00:30:38.882 "method": "bdev_nvme_attach_controller", 00:30:38.882 "req_id": 1 00:30:38.882 } 00:30:38.882 Got JSON-RPC error response 00:30:38.882 response: 00:30:38.882 { 00:30:38.882 "code": -114, 00:30:38.882 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:38.882 } 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:38.882 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:38.883 00:11:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.883 request: 00:30:38.883 { 00:30:38.883 "name": "NVMe0", 00:30:38.883 "trtype": "tcp", 00:30:38.883 "traddr": "10.0.0.2", 00:30:38.883 "adrfam": "ipv4", 00:30:38.883 "trsvcid": "4420", 00:30:38.883 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:38.883 "hostaddr": "10.0.0.1", 00:30:38.883 "prchk_reftag": false, 00:30:38.883 "prchk_guard": false, 00:30:38.883 "hdgst": false, 00:30:38.883 "ddgst": false, 00:30:38.883 "allow_unrecognized_csi": false, 00:30:38.883 "method": "bdev_nvme_attach_controller", 00:30:38.883 "req_id": 1 00:30:38.883 } 00:30:38.883 Got JSON-RPC error response 00:30:38.883 response: 00:30:38.883 { 00:30:38.883 "code": -114, 00:30:38.883 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:38.883 } 00:30:38.883 00:11:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.883 request: 00:30:38.883 { 00:30:38.883 "name": "NVMe0", 00:30:38.883 "trtype": "tcp", 00:30:38.883 "traddr": "10.0.0.2", 00:30:38.883 "adrfam": "ipv4", 00:30:38.883 "trsvcid": "4420", 00:30:38.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.883 "hostaddr": "10.0.0.1", 00:30:38.883 "prchk_reftag": false, 00:30:38.883 "prchk_guard": false, 00:30:38.883 "hdgst": false, 00:30:38.883 "ddgst": false, 00:30:38.883 "multipath": "disable", 00:30:38.883 "allow_unrecognized_csi": false, 00:30:38.883 "method": "bdev_nvme_attach_controller", 00:30:38.883 "req_id": 1 00:30:38.883 } 00:30:38.883 Got JSON-RPC error response 00:30:38.883 response: 00:30:38.883 { 00:30:38.883 "code": -114, 00:30:38.883 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:38.883 } 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.883 request: 00:30:38.883 { 00:30:38.883 "name": "NVMe0", 00:30:38.883 "trtype": "tcp", 00:30:38.883 "traddr": "10.0.0.2", 00:30:38.883 "adrfam": "ipv4", 00:30:38.883 "trsvcid": "4420", 00:30:38.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.883 "hostaddr": "10.0.0.1", 00:30:38.883 "prchk_reftag": false, 00:30:38.883 "prchk_guard": false, 00:30:38.883 "hdgst": false, 00:30:38.883 "ddgst": false, 00:30:38.883 "multipath": "failover", 00:30:38.883 "allow_unrecognized_csi": false, 00:30:38.883 "method": "bdev_nvme_attach_controller", 00:30:38.883 "req_id": 1 00:30:38.883 } 00:30:38.883 Got JSON-RPC error response 00:30:38.883 response: 00:30:38.883 { 00:30:38.883 "code": -114, 00:30:38.883 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:38.883 } 00:30:38.883 00:11:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.883 NVMe0n1 00:30:38.883 00:11:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:38.883 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.142 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:39.142 00:11:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:40.520 { 00:30:40.520 "results": [ 00:30:40.520 { 00:30:40.520 "job": "NVMe0n1", 00:30:40.520 "core_mask": "0x1", 00:30:40.520 "workload": "write", 00:30:40.520 "status": "finished", 00:30:40.520 "queue_depth": 128, 00:30:40.520 "io_size": 4096, 00:30:40.520 "runtime": 1.003901, 00:30:40.520 "iops": 21389.55932905735, 00:30:40.520 "mibps": 83.55296612913027, 00:30:40.520 "io_failed": 0, 00:30:40.520 "io_timeout": 0, 00:30:40.520 "avg_latency_us": 5975.644647164878, 00:30:40.520 "min_latency_us": 3651.2914285714287, 00:30:40.520 "max_latency_us": 13793.76761904762 00:30:40.520 } 00:30:40.520 ], 00:30:40.520 "core_count": 1 00:30:40.520 } 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 4141996 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4141996 ']' 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4141996 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141996 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4141996' 00:30:40.520 killing process with pid 4141996 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4141996 00:30:40.520 00:11:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4141996 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:41.456 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:41.456 [2024-12-14 00:11:16.856998] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:41.456 [2024-12-14 00:11:16.857089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4141996 ] 00:30:41.456 [2024-12-14 00:11:16.970767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.456 [2024-12-14 00:11:17.087353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.456 [2024-12-14 00:11:18.124860] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 1856f405-51c8-4824-aaa7-ec21e3ce3e43 already exists 00:30:41.456 [2024-12-14 00:11:18.124901] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:1856f405-51c8-4824-aaa7-ec21e3ce3e43 alias for bdev NVMe1n1 00:30:41.456 [2024-12-14 00:11:18.124914] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:41.456 Running I/O for 1 seconds... 00:30:41.456 21345.00 IOPS, 83.38 MiB/s 00:30:41.456 Latency(us) 00:30:41.456 [2024-12-13T23:11:20.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.456 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:41.456 NVMe0n1 : 1.00 21389.56 83.55 0.00 0.00 5975.64 3651.29 13793.77 00:30:41.456 [2024-12-13T23:11:20.597Z] =================================================================================================================== 00:30:41.456 [2024-12-13T23:11:20.597Z] Total : 21389.56 83.55 0.00 0.00 5975.64 3651.29 13793.77 00:30:41.456 Received shutdown signal, test time was about 1.000000 seconds 00:30:41.456 00:30:41.456 Latency(us) 00:30:41.456 [2024-12-13T23:11:20.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.456 [2024-12-13T23:11:20.597Z] =================================================================================================================== 00:30:41.456 [2024-12-13T23:11:20.597Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:41.456 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:41.456 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.457 rmmod nvme_tcp 00:30:41.457 rmmod nvme_fabrics 00:30:41.457 rmmod nvme_keyring 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 4141757 ']' 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 4141757 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 4141757 ']' 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 4141757 
00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4141757 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4141757' 00:30:41.457 killing process with pid 4141757 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 4141757 00:30:41.457 00:11:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 4141757 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.834 00:11:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.367 00:30:45.367 real 0m14.254s 00:30:45.367 user 0m22.817s 00:30:45.367 sys 0m5.200s 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:45.367 ************************************ 00:30:45.367 END TEST nvmf_multicontroller 00:30:45.367 ************************************ 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:45.367 ************************************ 00:30:45.367 START TEST nvmf_aer 00:30:45.367 ************************************ 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:45.367 * Looking for test storage... 
00:30:45.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.367 --rc genhtml_branch_coverage=1 00:30:45.367 --rc genhtml_function_coverage=1 00:30:45.367 --rc genhtml_legend=1 00:30:45.367 --rc geninfo_all_blocks=1 00:30:45.367 --rc geninfo_unexecuted_blocks=1 00:30:45.367 00:30:45.367 ' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.367 --rc 
genhtml_branch_coverage=1 00:30:45.367 --rc genhtml_function_coverage=1 00:30:45.367 --rc genhtml_legend=1 00:30:45.367 --rc geninfo_all_blocks=1 00:30:45.367 --rc geninfo_unexecuted_blocks=1 00:30:45.367 00:30:45.367 ' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.367 --rc genhtml_branch_coverage=1 00:30:45.367 --rc genhtml_function_coverage=1 00:30:45.367 --rc genhtml_legend=1 00:30:45.367 --rc geninfo_all_blocks=1 00:30:45.367 --rc geninfo_unexecuted_blocks=1 00:30:45.367 00:30:45.367 ' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.367 --rc genhtml_branch_coverage=1 00:30:45.367 --rc genhtml_function_coverage=1 00:30:45.367 --rc genhtml_legend=1 00:30:45.367 --rc geninfo_all_blocks=1 00:30:45.367 --rc geninfo_unexecuted_blocks=1 00:30:45.367 00:30:45.367 ' 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.367 00:11:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.367 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:45.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.368 00:11:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:50.641 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:50.641 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.641 00:11:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:50.641 Found net devices under 0000:af:00.0: cvl_0_0 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:50.641 Found net devices under 0000:af:00.1: cvl_0_1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:50.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:30:50.641 00:30:50.641 --- 10.0.0.2 ping statistics --- 00:30:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.641 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:30:50.641 00:30:50.641 --- 10.0.0.1 ping statistics --- 00:30:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.641 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.641 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=4146146 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 4146146 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 4146146 ']' 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.642 00:11:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:50.642 [2024-12-14 00:11:29.663899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:50.642 [2024-12-14 00:11:29.663993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.901 [2024-12-14 00:11:29.780758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.901 [2024-12-14 00:11:29.891603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:50.901 [2024-12-14 00:11:29.891649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.901 [2024-12-14 00:11:29.891659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.901 [2024-12-14 00:11:29.891669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.901 [2024-12-14 00:11:29.891679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.901 [2024-12-14 00:11:29.893806] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.901 [2024-12-14 00:11:29.893878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.901 [2024-12-14 00:11:29.893984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.901 [2024-12-14 00:11:29.893994] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.469 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.470 [2024-12-14 00:11:30.535234] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.470 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.729 Malloc0 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.729 [2024-12-14 00:11:30.667391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:51.729 [ 00:30:51.729 { 00:30:51.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:51.729 "subtype": "Discovery", 00:30:51.729 "listen_addresses": [], 00:30:51.729 "allow_any_host": true, 00:30:51.729 "hosts": [] 00:30:51.729 }, 00:30:51.729 { 00:30:51.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.729 "subtype": "NVMe", 00:30:51.729 "listen_addresses": [ 00:30:51.729 { 00:30:51.729 "trtype": "TCP", 00:30:51.729 "adrfam": "IPv4", 00:30:51.729 "traddr": "10.0.0.2", 00:30:51.729 "trsvcid": "4420" 00:30:51.729 } 00:30:51.729 ], 00:30:51.729 "allow_any_host": true, 00:30:51.729 "hosts": [], 00:30:51.729 "serial_number": "SPDK00000000000001", 00:30:51.729 "model_number": "SPDK bdev Controller", 00:30:51.729 "max_namespaces": 2, 00:30:51.729 "min_cntlid": 1, 00:30:51.729 "max_cntlid": 65519, 00:30:51.729 "namespaces": [ 00:30:51.729 { 00:30:51.729 "nsid": 1, 00:30:51.729 "bdev_name": "Malloc0", 00:30:51.729 "name": "Malloc0", 00:30:51.729 "nguid": "C8D957CEA6FD4D79AC2F92F8E8C19A32", 00:30:51.729 "uuid": "c8d957ce-a6fd-4d79-ac2f-92f8e8c19a32" 00:30:51.729 } 00:30:51.729 ] 00:30:51.729 } 00:30:51.729 ] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=4146393 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:51.729 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:51.988 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.988 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:51.988 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:51.988 00:11:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.988 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.247 Malloc1 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.247 [ 00:30:52.247 { 00:30:52.247 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:52.247 "subtype": "Discovery", 00:30:52.247 "listen_addresses": [], 00:30:52.247 "allow_any_host": true, 00:30:52.247 "hosts": [] 00:30:52.247 }, 00:30:52.247 { 00:30:52.247 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.247 "subtype": "NVMe", 00:30:52.247 "listen_addresses": [ 00:30:52.247 { 00:30:52.247 "trtype": "TCP", 00:30:52.247 "adrfam": "IPv4", 00:30:52.247 "traddr": "10.0.0.2", 00:30:52.247 "trsvcid": "4420" 00:30:52.247 } 00:30:52.247 ], 00:30:52.247 "allow_any_host": true, 00:30:52.247 "hosts": [], 00:30:52.247 "serial_number": "SPDK00000000000001", 00:30:52.247 "model_number": 
"SPDK bdev Controller", 00:30:52.247 "max_namespaces": 2, 00:30:52.247 "min_cntlid": 1, 00:30:52.247 "max_cntlid": 65519, 00:30:52.247 "namespaces": [ 00:30:52.247 { 00:30:52.247 "nsid": 1, 00:30:52.247 "bdev_name": "Malloc0", 00:30:52.247 "name": "Malloc0", 00:30:52.247 "nguid": "C8D957CEA6FD4D79AC2F92F8E8C19A32", 00:30:52.247 "uuid": "c8d957ce-a6fd-4d79-ac2f-92f8e8c19a32" 00:30:52.247 }, 00:30:52.247 { 00:30:52.247 "nsid": 2, 00:30:52.247 "bdev_name": "Malloc1", 00:30:52.247 "name": "Malloc1", 00:30:52.247 "nguid": "38F58F2D78074173B011AED0FFAF7F53", 00:30:52.247 "uuid": "38f58f2d-7807-4173-b011-aed0ffaf7f53" 00:30:52.247 } 00:30:52.247 ] 00:30:52.247 } 00:30:52.247 ] 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 4146393 00:30:52.247 Asynchronous Event Request test 00:30:52.247 Attaching to 10.0.0.2 00:30:52.247 Attached to 10.0.0.2 00:30:52.247 Registering asynchronous event callbacks... 00:30:52.247 Starting namespace attribute notice tests for all controllers... 00:30:52.247 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:52.247 aer_cb - Changed Namespace 00:30:52.247 Cleaning up... 
00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.247 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:52.505 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:52.505 rmmod nvme_tcp 
00:30:52.764 rmmod nvme_fabrics 00:30:52.764 rmmod nvme_keyring 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 4146146 ']' 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 4146146 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 4146146 ']' 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 4146146 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4146146 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4146146' 00:30:52.764 killing process with pid 4146146 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 4146146 00:30:52.764 00:11:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 4146146 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.141 00:11:32 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.141 00:11:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.046 00:30:56.046 real 0m10.853s 00:30:56.046 user 0m12.626s 00:30:56.046 sys 0m4.564s 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:56.046 ************************************ 00:30:56.046 END TEST nvmf_aer 00:30:56.046 ************************************ 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.046 00:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:56.046 ************************************ 00:30:56.046 START TEST nvmf_async_init 
00:30:56.046 ************************************ 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:56.046 * Looking for test storage... 00:30:56.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:56.046 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:56.305 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:56.305 --rc genhtml_branch_coverage=1 00:30:56.305 --rc genhtml_function_coverage=1 00:30:56.305 --rc genhtml_legend=1 00:30:56.305 --rc geninfo_all_blocks=1 00:30:56.305 --rc geninfo_unexecuted_blocks=1 00:30:56.305 00:30:56.305 ' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:56.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.305 --rc genhtml_branch_coverage=1 00:30:56.305 --rc genhtml_function_coverage=1 00:30:56.305 --rc genhtml_legend=1 00:30:56.305 --rc geninfo_all_blocks=1 00:30:56.305 --rc geninfo_unexecuted_blocks=1 00:30:56.305 00:30:56.305 ' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:56.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.305 --rc genhtml_branch_coverage=1 00:30:56.305 --rc genhtml_function_coverage=1 00:30:56.305 --rc genhtml_legend=1 00:30:56.305 --rc geninfo_all_blocks=1 00:30:56.305 --rc geninfo_unexecuted_blocks=1 00:30:56.305 00:30:56.305 ' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:56.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.305 --rc genhtml_branch_coverage=1 00:30:56.305 --rc genhtml_function_coverage=1 00:30:56.305 --rc genhtml_legend=1 00:30:56.305 --rc geninfo_all_blocks=1 00:30:56.305 --rc geninfo_unexecuted_blocks=1 00:30:56.305 00:30:56.305 ' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.305 00:11:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.305 
00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:56.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.305 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a4964244ec374cc1aded8d35cf98f683 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.306 00:11:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:01.577 00:11:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:01.577 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:01.577 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:01.578 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:01.578 Found net devices under 0000:af:00.0: cvl_0_0 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:01.578 Found net devices under 0000:af:00.1: cvl_0_1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:01.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:01.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:31:01.578 00:31:01.578 --- 10.0.0.2 ping statistics --- 00:31:01.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.578 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:31:01.578 00:31:01.578 --- 10.0.0.1 ping statistics --- 00:31:01.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.578 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=4150085 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 4150085 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 4150085 ']' 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.578 00:11:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:01.837 [2024-12-14 00:11:40.729558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:01.837 [2024-12-14 00:11:40.729642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.837 [2024-12-14 00:11:40.846120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.837 [2024-12-14 00:11:40.947927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:01.837 [2024-12-14 00:11:40.947977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:01.837 [2024-12-14 00:11:40.947987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:01.837 [2024-12-14 00:11:40.948013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:01.837 [2024-12-14 00:11:40.948022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
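The target is now up on core 0; the commands the test drives against it next (create the TCP transport, back a subsystem with a null bdev, expose it on a listener, then attach from the host side) can be condensed into a dry-run sketch. `RPC` defaults to plain `echo` here so the sequence can be read and replayed without a live SPDK target; pointing it at SPDK's `scripts/rpc.py` would execute it for real. The namespace GUID and NQN are the ones from the log above.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence async_init.sh performs against nvmf_tgt.
# RPC=echo prints each call instead of executing it; set RPC=scripts/rpc.py
# (with a running target) to drive a real SPDK instance.
RPC=${RPC:-echo}

$RPC nvmf_create_transport -t tcp -o                       # TCP transport, C2H success optimization
$RPC bdev_null_create null0 1024 512                       # 1024 MiB null bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4964244ec374cc1aded8d35cf98f683
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach a bdev_nvme controller over the same listener
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0
```

The `bdev_get_bdevs -b nvme0n1` dumps that follow in the log are how the test confirms each step: note `cntlid` incrementing from 1 to 2 across the `bdev_nvme_reset_controller` call.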
00:31:01.837 [2024-12-14 00:11:40.949257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.405 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.405 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:31:02.405 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:02.405 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:02.405 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 [2024-12-14 00:11:41.563607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 null0 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a4964244ec374cc1aded8d35cf98f683 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.664 [2024-12-14 00:11:41.615902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.664 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.923 nvme0n1 00:31:02.923 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.923 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:02.923 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.923 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.923 [ 00:31:02.923 { 00:31:02.923 "name": "nvme0n1", 00:31:02.923 "aliases": [ 00:31:02.923 "a4964244-ec37-4cc1-aded-8d35cf98f683" 00:31:02.923 ], 00:31:02.923 "product_name": "NVMe disk", 00:31:02.923 "block_size": 512, 00:31:02.923 "num_blocks": 2097152, 00:31:02.923 "uuid": "a4964244-ec37-4cc1-aded-8d35cf98f683", 00:31:02.923 "numa_id": 1, 00:31:02.923 "assigned_rate_limits": { 00:31:02.923 "rw_ios_per_sec": 0, 00:31:02.923 "rw_mbytes_per_sec": 0, 00:31:02.923 "r_mbytes_per_sec": 0, 00:31:02.923 "w_mbytes_per_sec": 0 00:31:02.923 }, 00:31:02.923 "claimed": false, 00:31:02.923 "zoned": false, 00:31:02.923 "supported_io_types": { 00:31:02.923 "read": true, 00:31:02.923 "write": true, 00:31:02.923 "unmap": false, 00:31:02.923 "flush": true, 00:31:02.923 "reset": true, 00:31:02.923 "nvme_admin": true, 00:31:02.923 "nvme_io": true, 00:31:02.923 "nvme_io_md": false, 00:31:02.923 "write_zeroes": true, 00:31:02.923 "zcopy": false, 00:31:02.923 "get_zone_info": false, 00:31:02.923 "zone_management": false, 00:31:02.923 "zone_append": false, 00:31:02.923 "compare": true, 00:31:02.923 "compare_and_write": true, 00:31:02.924 "abort": true, 00:31:02.924 "seek_hole": false, 00:31:02.924 "seek_data": false, 00:31:02.924 "copy": true, 00:31:02.924 
"nvme_iov_md": false 00:31:02.924 }, 00:31:02.924 "memory_domains": [ 00:31:02.924 { 00:31:02.924 "dma_device_id": "system", 00:31:02.924 "dma_device_type": 1 00:31:02.924 } 00:31:02.924 ], 00:31:02.924 "driver_specific": { 00:31:02.924 "nvme": [ 00:31:02.924 { 00:31:02.924 "trid": { 00:31:02.924 "trtype": "TCP", 00:31:02.924 "adrfam": "IPv4", 00:31:02.924 "traddr": "10.0.0.2", 00:31:02.924 "trsvcid": "4420", 00:31:02.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:02.924 }, 00:31:02.924 "ctrlr_data": { 00:31:02.924 "cntlid": 1, 00:31:02.924 "vendor_id": "0x8086", 00:31:02.924 "model_number": "SPDK bdev Controller", 00:31:02.924 "serial_number": "00000000000000000000", 00:31:02.924 "firmware_revision": "25.01", 00:31:02.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.924 "oacs": { 00:31:02.924 "security": 0, 00:31:02.924 "format": 0, 00:31:02.924 "firmware": 0, 00:31:02.924 "ns_manage": 0 00:31:02.924 }, 00:31:02.924 "multi_ctrlr": true, 00:31:02.924 "ana_reporting": false 00:31:02.924 }, 00:31:02.924 "vs": { 00:31:02.924 "nvme_version": "1.3" 00:31:02.924 }, 00:31:02.924 "ns_data": { 00:31:02.924 "id": 1, 00:31:02.924 "can_share": true 00:31:02.924 } 00:31:02.924 } 00:31:02.924 ], 00:31:02.924 "mp_policy": "active_passive" 00:31:02.924 } 00:31:02.924 } 00:31:02.924 ] 00:31:02.924 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.924 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:02.924 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.924 00:11:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.924 [2024-12-14 00:11:41.886089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:02.924 [2024-12-14 00:11:41.886189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:31:02.924 [2024-12-14 00:11:42.028554] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:02.924 [ 00:31:02.924 { 00:31:02.924 "name": "nvme0n1", 00:31:02.924 "aliases": [ 00:31:02.924 "a4964244-ec37-4cc1-aded-8d35cf98f683" 00:31:02.924 ], 00:31:02.924 "product_name": "NVMe disk", 00:31:02.924 "block_size": 512, 00:31:02.924 "num_blocks": 2097152, 00:31:02.924 "uuid": "a4964244-ec37-4cc1-aded-8d35cf98f683", 00:31:02.924 "numa_id": 1, 00:31:02.924 "assigned_rate_limits": { 00:31:02.924 "rw_ios_per_sec": 0, 00:31:02.924 "rw_mbytes_per_sec": 0, 00:31:02.924 "r_mbytes_per_sec": 0, 00:31:02.924 "w_mbytes_per_sec": 0 00:31:02.924 }, 00:31:02.924 "claimed": false, 00:31:02.924 "zoned": false, 00:31:02.924 "supported_io_types": { 00:31:02.924 "read": true, 00:31:02.924 "write": true, 00:31:02.924 "unmap": false, 00:31:02.924 "flush": true, 00:31:02.924 "reset": true, 00:31:02.924 "nvme_admin": true, 00:31:02.924 "nvme_io": true, 00:31:02.924 "nvme_io_md": false, 00:31:02.924 "write_zeroes": true, 00:31:02.924 "zcopy": false, 00:31:02.924 "get_zone_info": false, 00:31:02.924 "zone_management": false, 00:31:02.924 "zone_append": false, 00:31:02.924 "compare": true, 00:31:02.924 "compare_and_write": true, 00:31:02.924 "abort": true, 00:31:02.924 "seek_hole": false, 00:31:02.924 "seek_data": false, 00:31:02.924 "copy": true, 00:31:02.924 "nvme_iov_md": false 00:31:02.924 }, 00:31:02.924 "memory_domains": [ 
00:31:02.924 { 00:31:02.924 "dma_device_id": "system", 00:31:02.924 "dma_device_type": 1 00:31:02.924 } 00:31:02.924 ], 00:31:02.924 "driver_specific": { 00:31:02.924 "nvme": [ 00:31:02.924 { 00:31:02.924 "trid": { 00:31:02.924 "trtype": "TCP", 00:31:02.924 "adrfam": "IPv4", 00:31:02.924 "traddr": "10.0.0.2", 00:31:02.924 "trsvcid": "4420", 00:31:02.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:02.924 }, 00:31:02.924 "ctrlr_data": { 00:31:02.924 "cntlid": 2, 00:31:02.924 "vendor_id": "0x8086", 00:31:02.924 "model_number": "SPDK bdev Controller", 00:31:02.924 "serial_number": "00000000000000000000", 00:31:02.924 "firmware_revision": "25.01", 00:31:02.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.924 "oacs": { 00:31:02.924 "security": 0, 00:31:02.924 "format": 0, 00:31:02.924 "firmware": 0, 00:31:02.924 "ns_manage": 0 00:31:02.924 }, 00:31:02.924 "multi_ctrlr": true, 00:31:02.924 "ana_reporting": false 00:31:02.924 }, 00:31:02.924 "vs": { 00:31:02.924 "nvme_version": "1.3" 00:31:02.924 }, 00:31:02.924 "ns_data": { 00:31:02.924 "id": 1, 00:31:02.924 "can_share": true 00:31:02.924 } 00:31:02.924 } 00:31:02.924 ], 00:31:02.924 "mp_policy": "active_passive" 00:31:02.924 } 00:31:02.924 } 00:31:02.924 ] 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.924 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5xxQYqjqO4 
00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5xxQYqjqO4 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.5xxQYqjqO4 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 [2024-12-14 00:11:42.102781] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:03.183 [2024-12-14 00:11:42.102939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
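The TLS leg just shown follows a fixed recipe: write the PSK in NVMe TLS interchange format to a 0600 temp file, register it with the keyring, lock the subsystem down to named hosts, open a `--secure-channel` listener on a second port, and authorize one host NQN against the key. A sketch, again with `RPC` defaulting to `echo` so only the file-handling parts actually run; the key string is the well-known test vector from the log, not a production secret.

```shell
#!/usr/bin/env bash
# Sketch of the TLS PSK setup from async_init.sh. Only mktemp/chmod execute
# for real here; the RPC calls are dry-run (RPC=echo) unless pointed at a
# live SPDK target via scripts/rpc.py.
set -e
RPC=${RPC:-echo}

key_path=$(mktemp)
# -n: no trailing newline -- the keyring expects the bare interchange string
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"                                     # keyring refuses world-readable keys

$RPC keyring_file_add_key key0 "$key_path"
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

rm -f "$key_path"
```

Both the listener and the attach print "TLS support is considered experimental" in the log above, which is expected for this SPDK version.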
00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 [2024-12-14 00:11:42.122839] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:03.183 nvme0n1 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.183 [ 00:31:03.183 { 00:31:03.183 "name": "nvme0n1", 00:31:03.183 "aliases": [ 00:31:03.183 "a4964244-ec37-4cc1-aded-8d35cf98f683" 00:31:03.183 ], 00:31:03.183 "product_name": "NVMe disk", 00:31:03.183 "block_size": 512, 00:31:03.183 "num_blocks": 2097152, 00:31:03.183 "uuid": "a4964244-ec37-4cc1-aded-8d35cf98f683", 00:31:03.183 "numa_id": 1, 00:31:03.183 "assigned_rate_limits": { 00:31:03.183 "rw_ios_per_sec": 0, 00:31:03.183 
"rw_mbytes_per_sec": 0, 00:31:03.183 "r_mbytes_per_sec": 0, 00:31:03.183 "w_mbytes_per_sec": 0 00:31:03.183 }, 00:31:03.183 "claimed": false, 00:31:03.183 "zoned": false, 00:31:03.183 "supported_io_types": { 00:31:03.183 "read": true, 00:31:03.183 "write": true, 00:31:03.183 "unmap": false, 00:31:03.183 "flush": true, 00:31:03.183 "reset": true, 00:31:03.183 "nvme_admin": true, 00:31:03.183 "nvme_io": true, 00:31:03.183 "nvme_io_md": false, 00:31:03.183 "write_zeroes": true, 00:31:03.183 "zcopy": false, 00:31:03.183 "get_zone_info": false, 00:31:03.183 "zone_management": false, 00:31:03.183 "zone_append": false, 00:31:03.183 "compare": true, 00:31:03.183 "compare_and_write": true, 00:31:03.183 "abort": true, 00:31:03.183 "seek_hole": false, 00:31:03.183 "seek_data": false, 00:31:03.183 "copy": true, 00:31:03.183 "nvme_iov_md": false 00:31:03.183 }, 00:31:03.183 "memory_domains": [ 00:31:03.183 { 00:31:03.183 "dma_device_id": "system", 00:31:03.183 "dma_device_type": 1 00:31:03.183 } 00:31:03.183 ], 00:31:03.183 "driver_specific": { 00:31:03.183 "nvme": [ 00:31:03.183 { 00:31:03.183 "trid": { 00:31:03.183 "trtype": "TCP", 00:31:03.183 "adrfam": "IPv4", 00:31:03.183 "traddr": "10.0.0.2", 00:31:03.183 "trsvcid": "4421", 00:31:03.183 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:03.183 }, 00:31:03.183 "ctrlr_data": { 00:31:03.183 "cntlid": 3, 00:31:03.183 "vendor_id": "0x8086", 00:31:03.183 "model_number": "SPDK bdev Controller", 00:31:03.183 "serial_number": "00000000000000000000", 00:31:03.183 "firmware_revision": "25.01", 00:31:03.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.183 "oacs": { 00:31:03.183 "security": 0, 00:31:03.183 "format": 0, 00:31:03.183 "firmware": 0, 00:31:03.183 "ns_manage": 0 00:31:03.183 }, 00:31:03.183 "multi_ctrlr": true, 00:31:03.183 "ana_reporting": false 00:31:03.183 }, 00:31:03.183 "vs": { 00:31:03.183 "nvme_version": "1.3" 00:31:03.183 }, 00:31:03.183 "ns_data": { 00:31:03.183 "id": 1, 00:31:03.183 "can_share": true 00:31:03.183 } 
00:31:03.183 } 00:31:03.183 ], 00:31:03.183 "mp_policy": "active_passive" 00:31:03.183 } 00:31:03.183 } 00:31:03.183 ] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:03.183 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.5xxQYqjqO4 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.184 rmmod nvme_tcp 00:31:03.184 rmmod nvme_fabrics 00:31:03.184 rmmod nvme_keyring 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:03.184 00:11:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 4150085 ']' 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 4150085 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 4150085 ']' 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 4150085 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.184 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4150085 00:31:03.442 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.442 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.442 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4150085' 00:31:03.442 killing process with pid 4150085 00:31:03.442 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 4150085 00:31:03.442 00:11:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 4150085 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.379 
00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.379 00:11:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.913 00:31:06.913 real 0m10.504s 00:31:06.913 user 0m4.619s 00:31:06.913 sys 0m4.492s 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:06.913 ************************************ 00:31:06.913 END TEST nvmf_async_init 00:31:06.913 ************************************ 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.913 ************************************ 00:31:06.913 START TEST dma 00:31:06.913 ************************************ 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:31:06.913 * Looking for test storage... 00:31:06.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:06.913 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:06.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.914 --rc genhtml_branch_coverage=1 00:31:06.914 --rc genhtml_function_coverage=1 00:31:06.914 --rc genhtml_legend=1 00:31:06.914 --rc geninfo_all_blocks=1 00:31:06.914 --rc geninfo_unexecuted_blocks=1 00:31:06.914 00:31:06.914 ' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:06.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.914 --rc genhtml_branch_coverage=1 00:31:06.914 --rc genhtml_function_coverage=1 
00:31:06.914 --rc genhtml_legend=1 00:31:06.914 --rc geninfo_all_blocks=1 00:31:06.914 --rc geninfo_unexecuted_blocks=1 00:31:06.914 00:31:06.914 ' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:06.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.914 --rc genhtml_branch_coverage=1 00:31:06.914 --rc genhtml_function_coverage=1 00:31:06.914 --rc genhtml_legend=1 00:31:06.914 --rc geninfo_all_blocks=1 00:31:06.914 --rc geninfo_unexecuted_blocks=1 00:31:06.914 00:31:06.914 ' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:06.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.914 --rc genhtml_branch_coverage=1 00:31:06.914 --rc genhtml_function_coverage=1 00:31:06.914 --rc genhtml_legend=1 00:31:06.914 --rc geninfo_all_blocks=1 00:31:06.914 --rc geninfo_unexecuted_blocks=1 00:31:06.914 00:31:06.914 ' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:06.914 
00:11:45 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:06.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:06.914 00:31:06.914 real 0m0.212s 00:31:06.914 user 0m0.127s 00:31:06.914 sys 0m0.100s 00:31:06.914 00:11:45 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:06.914 ************************************ 00:31:06.914 END TEST dma 00:31:06.914 ************************************ 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.914 ************************************ 00:31:06.914 START TEST nvmf_identify 00:31:06.914 ************************************ 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:06.914 * Looking for test storage... 
00:31:06.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:31:06.914 00:11:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:06.914 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:06.915 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.915 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.174 --rc genhtml_branch_coverage=1 00:31:07.174 --rc genhtml_function_coverage=1 00:31:07.174 --rc genhtml_legend=1 00:31:07.174 --rc geninfo_all_blocks=1 00:31:07.174 --rc geninfo_unexecuted_blocks=1 00:31:07.174 00:31:07.174 ' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:31:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.174 --rc genhtml_branch_coverage=1 00:31:07.174 --rc genhtml_function_coverage=1 00:31:07.174 --rc genhtml_legend=1 00:31:07.174 --rc geninfo_all_blocks=1 00:31:07.174 --rc geninfo_unexecuted_blocks=1 00:31:07.174 00:31:07.174 ' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.174 --rc genhtml_branch_coverage=1 00:31:07.174 --rc genhtml_function_coverage=1 00:31:07.174 --rc genhtml_legend=1 00:31:07.174 --rc geninfo_all_blocks=1 00:31:07.174 --rc geninfo_unexecuted_blocks=1 00:31:07.174 00:31:07.174 ' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:07.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:07.174 --rc genhtml_branch_coverage=1 00:31:07.174 --rc genhtml_function_coverage=1 00:31:07.174 --rc genhtml_legend=1 00:31:07.174 --rc geninfo_all_blocks=1 00:31:07.174 --rc geninfo_unexecuted_blocks=1 00:31:07.174 00:31:07.174 ' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.174 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:07.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:07.175 00:11:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.447 00:11:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:12.447 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.447 
00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:12.447 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:12.447 Found net devices under 0000:af:00.0: cvl_0_0 00:31:12.447 00:11:51 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:12.447 Found net devices under 0000:af:00.1: cvl_0_1 00:31:12.447 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:31:12.448 00:31:12.448 --- 10.0.0.2 ping statistics --- 00:31:12.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.448 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:12.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:12.448 00:31:12.448 --- 10.0.0.1 ping statistics --- 00:31:12.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.448 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4153894 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4153894 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 4153894 ']' 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:12.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:12.448 00:11:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:12.448 [2024-12-14 00:11:51.405407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:12.448 [2024-12-14 00:11:51.405523] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.448 [2024-12-14 00:11:51.522925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.706 [2024-12-14 00:11:51.630529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.706 [2024-12-14 00:11:51.630576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.706 [2024-12-14 00:11:51.630586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.707 [2024-12-14 00:11:51.630613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.707 [2024-12-14 00:11:51.630621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:12.707 [2024-12-14 00:11:51.633058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.707 [2024-12-14 00:11:51.633138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.707 [2024-12-14 00:11:51.633237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.707 [2024-12-14 00:11:51.633246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 [2024-12-14 00:11:52.215263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 Malloc0 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 [2024-12-14 00:11:52.366027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 00:11:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:13.274 [ 00:31:13.274 { 00:31:13.274 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:13.274 "subtype": "Discovery", 00:31:13.274 "listen_addresses": [ 00:31:13.274 { 00:31:13.274 "trtype": "TCP", 00:31:13.274 "adrfam": "IPv4", 00:31:13.274 "traddr": "10.0.0.2", 00:31:13.274 "trsvcid": "4420" 00:31:13.274 } 00:31:13.274 ], 00:31:13.274 "allow_any_host": true, 00:31:13.274 "hosts": [] 00:31:13.274 }, 00:31:13.274 { 00:31:13.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.274 "subtype": "NVMe", 00:31:13.274 "listen_addresses": [ 00:31:13.274 { 00:31:13.274 "trtype": "TCP", 00:31:13.274 "adrfam": "IPv4", 00:31:13.274 "traddr": "10.0.0.2", 00:31:13.274 "trsvcid": "4420" 00:31:13.274 } 00:31:13.274 ], 00:31:13.274 "allow_any_host": true, 00:31:13.274 "hosts": [], 00:31:13.274 "serial_number": "SPDK00000000000001", 00:31:13.274 "model_number": "SPDK bdev Controller", 00:31:13.274 "max_namespaces": 32, 00:31:13.274 "min_cntlid": 1, 00:31:13.274 "max_cntlid": 65519, 00:31:13.274 "namespaces": [ 00:31:13.274 { 00:31:13.274 "nsid": 1, 00:31:13.274 "bdev_name": "Malloc0", 00:31:13.274 "name": "Malloc0", 00:31:13.274 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:13.274 "eui64": "ABCDEF0123456789", 00:31:13.274 "uuid": "4d1cbfeb-ab4d-4c37-a5c0-c40f43a1a9be" 00:31:13.274 } 00:31:13.274 ] 00:31:13.274 } 00:31:13.274 ] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.274 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:13.536 [2024-12-14 00:11:52.442292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:13.537 [2024-12-14 00:11:52.442358] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154100 ] 00:31:13.537 [2024-12-14 00:11:52.502652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:13.537 [2024-12-14 00:11:52.502758] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:13.537 [2024-12-14 00:11:52.502772] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:13.537 [2024-12-14 00:11:52.502795] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:13.537 [2024-12-14 00:11:52.502811] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:13.537 [2024-12-14 00:11:52.503424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:13.537 [2024-12-14 00:11:52.503475] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500001db80 0 00:31:13.537 [2024-12-14 00:11:52.509453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:13.537 [2024-12-14 00:11:52.509480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:13.537 [2024-12-14 00:11:52.509492] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:13.537 [2024-12-14 00:11:52.509498] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:13.537 [2024-12-14 00:11:52.509551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.509560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.509568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.509591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:13.537 [2024-12-14 00:11:52.509615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.513453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.513473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.513478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.513507] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:13.537 [2024-12-14 00:11:52.513524] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:13.537 [2024-12-14 00:11:52.513533] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:13.537 [2024-12-14 00:11:52.513551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.513585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 00:11:52.513606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.513733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.513743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.513749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.513766] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:13.537 [2024-12-14 00:11:52.513778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:13.537 [2024-12-14 00:11:52.513790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513797] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.513817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 00:11:52.513836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.513940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.513948] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.513953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.513966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:13.537 [2024-12-14 00:11:52.513978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.513988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.513994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.514013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 00:11:52.514027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.514104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.514113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.514118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.514131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for 
CSTS.RDY = 0 (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.514146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.514169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 00:11:52.514183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.514263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.514271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.514276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.514288] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:13.537 [2024-12-14 00:11:52.514296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.514309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.514418] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:13.537 [2024-12-14 00:11:52.514429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.514452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.514479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 00:11:52.514493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.514570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.514583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.514588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.514601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:13.537 [2024-12-14 00:11:52.514617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.537 [2024-12-14 00:11:52.514640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.537 [2024-12-14 
00:11:52.514655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.537 [2024-12-14 00:11:52.514729] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.537 [2024-12-14 00:11:52.514737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.537 [2024-12-14 00:11:52.514742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.537 [2024-12-14 00:11:52.514754] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:13.537 [2024-12-14 00:11:52.514762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:13.537 [2024-12-14 00:11:52.514772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:13.537 [2024-12-14 00:11:52.514786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:13.537 [2024-12-14 00:11:52.514804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.537 [2024-12-14 00:11:52.514810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.514824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.538 [2024-12-14 00:11:52.514838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.538 [2024-12-14 00:11:52.514961] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.538 [2024-12-14 00:11:52.514971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.538 [2024-12-14 00:11:52.514975] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.514982] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:13.538 [2024-12-14 00:11:52.514989] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.538 [2024-12-14 00:11:52.514996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515021] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.538 [2024-12-14 00:11:52.515042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.515047] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.538 [2024-12-14 00:11:52.515067] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:13.538 [2024-12-14 00:11:52.515076] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:13.538 [2024-12-14 00:11:52.515083] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:13.538 [2024-12-14 00:11:52.515095] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:13.538 [2024-12-14 00:11:52.515101] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:13.538 [2024-12-14 00:11:52.515112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:13.538 [2024-12-14 00:11:52.515127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:13.538 [2024-12-14 00:11:52.515138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515150] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.538 [2024-12-14 00:11:52.515177] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.538 [2024-12-14 00:11:52.515263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.538 [2024-12-14 00:11:52.515272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.515277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.538 [2024-12-14 00:11:52.515295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 
00:11:52.515309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.538 [2024-12-14 00:11:52.515330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.538 [2024-12-14 00:11:52.515358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.538 [2024-12-14 00:11:52.515385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.538 [2024-12-14 00:11:52.515410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:13.538 [2024-12-14 00:11:52.515424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:13.538 [2024-12-14 00:11:52.515435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.538 [2024-12-14 00:11:52.515473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.538 [2024-12-14 00:11:52.515480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:13.538 [2024-12-14 00:11:52.515486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:13.538 [2024-12-14 00:11:52.515492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.538 [2024-12-14 00:11:52.515498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.538 [2024-12-14 00:11:52.515610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.538 [2024-12-14 00:11:52.515618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.515623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.538 [2024-12-14 00:11:52.515637] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:13.538 [2024-12-14 00:11:52.515646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:13.538 [2024-12-14 00:11:52.515666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.538 [2024-12-14 00:11:52.515698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.538 [2024-12-14 00:11:52.515800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.538 [2024-12-14 00:11:52.515812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.538 [2024-12-14 00:11:52.515818] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515824] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.538 [2024-12-14 00:11:52.515833] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.538 [2024-12-14 00:11:52.515839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515854] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:31:13.538 [2024-12-14 00:11:52.515876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.515882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.538 [2024-12-14 00:11:52.515908] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:13.538 [2024-12-14 00:11:52.515947] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.515966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.538 [2024-12-14 00:11:52.515979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.515990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.538 [2024-12-14 00:11:52.516003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.538 [2024-12-14 00:11:52.516023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.538 [2024-12-14 00:11:52.516031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.538 [2024-12-14 00:11:52.516202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.538 [2024-12-14 00:11:52.516214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=7 00:31:13.538 [2024-12-14 00:11:52.516220] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.516226] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=1024, cccid=4 00:31:13.538 [2024-12-14 00:11:52.516232] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=1024 00:31:13.538 [2024-12-14 00:11:52.516239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.516250] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.516255] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.516263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.538 [2024-12-14 00:11:52.516270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.516274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.538 [2024-12-14 00:11:52.516280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.538 [2024-12-14 00:11:52.559452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.538 [2024-12-14 00:11:52.559473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.538 [2024-12-14 00:11:52.559478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.539 [2024-12-14 00:11:52.559523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.539 [2024-12-14 
00:11:52.559544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.539 [2024-12-14 00:11:52.559572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.539 [2024-12-14 00:11:52.559773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.539 [2024-12-14 00:11:52.559782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.539 [2024-12-14 00:11:52.559789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559795] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=3072, cccid=4 00:31:13.539 [2024-12-14 00:11:52.559802] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=3072 00:31:13.539 [2024-12-14 00:11:52.559807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559817] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559822] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.539 [2024-12-14 00:11:52.559846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.539 [2024-12-14 00:11:52.559851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.539 [2024-12-14 00:11:52.559871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.559878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x61500001db80) 00:31:13.539 [2024-12-14 00:11:52.559893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.539 [2024-12-14 00:11:52.559914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.539 [2024-12-14 00:11:52.560014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.539 [2024-12-14 00:11:52.560027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.539 [2024-12-14 00:11:52.560033] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.560038] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8, cccid=4 00:31:13.539 [2024-12-14 00:11:52.560043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=8 00:31:13.539 [2024-12-14 00:11:52.560049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.560057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.560062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.601575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.539 [2024-12-14 00:11:52.601594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.539 [2024-12-14 00:11:52.601600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.539 [2024-12-14 00:11:52.601606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.539 ===================================================== 00:31:13.539 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:13.539 
===================================================== 00:31:13.539 Controller Capabilities/Features 00:31:13.539 ================================ 00:31:13.539 Vendor ID: 0000 00:31:13.539 Subsystem Vendor ID: 0000 00:31:13.539 Serial Number: .................... 00:31:13.539 Model Number: ........................................ 00:31:13.539 Firmware Version: 25.01 00:31:13.539 Recommended Arb Burst: 0 00:31:13.539 IEEE OUI Identifier: 00 00 00 00:31:13.539 Multi-path I/O 00:31:13.539 May have multiple subsystem ports: No 00:31:13.539 May have multiple controllers: No 00:31:13.539 Associated with SR-IOV VF: No 00:31:13.539 Max Data Transfer Size: 131072 00:31:13.539 Max Number of Namespaces: 0 00:31:13.539 Max Number of I/O Queues: 1024 00:31:13.539 NVMe Specification Version (VS): 1.3 00:31:13.539 NVMe Specification Version (Identify): 1.3 00:31:13.539 Maximum Queue Entries: 128 00:31:13.539 Contiguous Queues Required: Yes 00:31:13.539 Arbitration Mechanisms Supported 00:31:13.539 Weighted Round Robin: Not Supported 00:31:13.539 Vendor Specific: Not Supported 00:31:13.539 Reset Timeout: 15000 ms 00:31:13.539 Doorbell Stride: 4 bytes 00:31:13.539 NVM Subsystem Reset: Not Supported 00:31:13.539 Command Sets Supported 00:31:13.539 NVM Command Set: Supported 00:31:13.539 Boot Partition: Not Supported 00:31:13.539 Memory Page Size Minimum: 4096 bytes 00:31:13.539 Memory Page Size Maximum: 4096 bytes 00:31:13.539 Persistent Memory Region: Not Supported 00:31:13.539 Optional Asynchronous Events Supported 00:31:13.539 Namespace Attribute Notices: Not Supported 00:31:13.539 Firmware Activation Notices: Not Supported 00:31:13.539 ANA Change Notices: Not Supported 00:31:13.539 PLE Aggregate Log Change Notices: Not Supported 00:31:13.539 LBA Status Info Alert Notices: Not Supported 00:31:13.539 EGE Aggregate Log Change Notices: Not Supported 00:31:13.539 Normal NVM Subsystem Shutdown event: Not Supported 00:31:13.539 Zone Descriptor Change Notices: Not Supported 00:31:13.539 
Discovery Log Change Notices: Supported 00:31:13.539 Controller Attributes 00:31:13.539 128-bit Host Identifier: Not Supported 00:31:13.539 Non-Operational Permissive Mode: Not Supported 00:31:13.539 NVM Sets: Not Supported 00:31:13.539 Read Recovery Levels: Not Supported 00:31:13.539 Endurance Groups: Not Supported 00:31:13.539 Predictable Latency Mode: Not Supported 00:31:13.539 Traffic Based Keep ALive: Not Supported 00:31:13.539 Namespace Granularity: Not Supported 00:31:13.539 SQ Associations: Not Supported 00:31:13.539 UUID List: Not Supported 00:31:13.539 Multi-Domain Subsystem: Not Supported 00:31:13.539 Fixed Capacity Management: Not Supported 00:31:13.539 Variable Capacity Management: Not Supported 00:31:13.539 Delete Endurance Group: Not Supported 00:31:13.539 Delete NVM Set: Not Supported 00:31:13.539 Extended LBA Formats Supported: Not Supported 00:31:13.539 Flexible Data Placement Supported: Not Supported 00:31:13.539 00:31:13.539 Controller Memory Buffer Support 00:31:13.539 ================================ 00:31:13.539 Supported: No 00:31:13.539 00:31:13.539 Persistent Memory Region Support 00:31:13.539 ================================ 00:31:13.539 Supported: No 00:31:13.539 00:31:13.539 Admin Command Set Attributes 00:31:13.539 ============================ 00:31:13.539 Security Send/Receive: Not Supported 00:31:13.539 Format NVM: Not Supported 00:31:13.539 Firmware Activate/Download: Not Supported 00:31:13.539 Namespace Management: Not Supported 00:31:13.539 Device Self-Test: Not Supported 00:31:13.539 Directives: Not Supported 00:31:13.539 NVMe-MI: Not Supported 00:31:13.539 Virtualization Management: Not Supported 00:31:13.539 Doorbell Buffer Config: Not Supported 00:31:13.539 Get LBA Status Capability: Not Supported 00:31:13.539 Command & Feature Lockdown Capability: Not Supported 00:31:13.539 Abort Command Limit: 1 00:31:13.539 Async Event Request Limit: 4 00:31:13.539 Number of Firmware Slots: N/A 00:31:13.539 Firmware Slot 1 Read-Only: N/A 
00:31:13.539 Firmware Activation Without Reset: N/A 00:31:13.539 Multiple Update Detection Support: N/A 00:31:13.539 Firmware Update Granularity: No Information Provided 00:31:13.539 Per-Namespace SMART Log: No 00:31:13.539 Asymmetric Namespace Access Log Page: Not Supported 00:31:13.539 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:13.539 Command Effects Log Page: Not Supported 00:31:13.539 Get Log Page Extended Data: Supported 00:31:13.539 Telemetry Log Pages: Not Supported 00:31:13.539 Persistent Event Log Pages: Not Supported 00:31:13.539 Supported Log Pages Log Page: May Support 00:31:13.539 Commands Supported & Effects Log Page: Not Supported 00:31:13.539 Feature Identifiers & Effects Log Page:May Support 00:31:13.539 NVMe-MI Commands & Effects Log Page: May Support 00:31:13.539 Data Area 4 for Telemetry Log: Not Supported 00:31:13.539 Error Log Page Entries Supported: 128 00:31:13.539 Keep Alive: Not Supported 00:31:13.539 00:31:13.539 NVM Command Set Attributes 00:31:13.539 ========================== 00:31:13.539 Submission Queue Entry Size 00:31:13.539 Max: 1 00:31:13.539 Min: 1 00:31:13.539 Completion Queue Entry Size 00:31:13.539 Max: 1 00:31:13.539 Min: 1 00:31:13.539 Number of Namespaces: 0 00:31:13.539 Compare Command: Not Supported 00:31:13.539 Write Uncorrectable Command: Not Supported 00:31:13.539 Dataset Management Command: Not Supported 00:31:13.539 Write Zeroes Command: Not Supported 00:31:13.539 Set Features Save Field: Not Supported 00:31:13.539 Reservations: Not Supported 00:31:13.539 Timestamp: Not Supported 00:31:13.539 Copy: Not Supported 00:31:13.539 Volatile Write Cache: Not Present 00:31:13.539 Atomic Write Unit (Normal): 1 00:31:13.539 Atomic Write Unit (PFail): 1 00:31:13.539 Atomic Compare & Write Unit: 1 00:31:13.539 Fused Compare & Write: Supported 00:31:13.539 Scatter-Gather List 00:31:13.539 SGL Command Set: Supported 00:31:13.539 SGL Keyed: Supported 00:31:13.539 SGL Bit Bucket Descriptor: Not Supported 00:31:13.539 
SGL Metadata Pointer: Not Supported 00:31:13.539 Oversized SGL: Not Supported 00:31:13.539 SGL Metadata Address: Not Supported 00:31:13.539 SGL Offset: Supported 00:31:13.539 Transport SGL Data Block: Not Supported 00:31:13.540 Replay Protected Memory Block: Not Supported 00:31:13.540 00:31:13.540 Firmware Slot Information 00:31:13.540 ========================= 00:31:13.540 Active slot: 0 00:31:13.540 00:31:13.540 00:31:13.540 Error Log 00:31:13.540 ========= 00:31:13.540 00:31:13.540 Active Namespaces 00:31:13.540 ================= 00:31:13.540 Discovery Log Page 00:31:13.540 ================== 00:31:13.540 Generation Counter: 2 00:31:13.540 Number of Records: 2 00:31:13.540 Record Format: 0 00:31:13.540 00:31:13.540 Discovery Log Entry 0 00:31:13.540 ---------------------- 00:31:13.540 Transport Type: 3 (TCP) 00:31:13.540 Address Family: 1 (IPv4) 00:31:13.540 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:13.540 Entry Flags: 00:31:13.540 Duplicate Returned Information: 1 00:31:13.540 Explicit Persistent Connection Support for Discovery: 1 00:31:13.540 Transport Requirements: 00:31:13.540 Secure Channel: Not Required 00:31:13.540 Port ID: 0 (0x0000) 00:31:13.540 Controller ID: 65535 (0xffff) 00:31:13.540 Admin Max SQ Size: 128 00:31:13.540 Transport Service Identifier: 4420 00:31:13.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:13.540 Transport Address: 10.0.0.2 00:31:13.540 Discovery Log Entry 1 00:31:13.540 ---------------------- 00:31:13.540 Transport Type: 3 (TCP) 00:31:13.540 Address Family: 1 (IPv4) 00:31:13.540 Subsystem Type: 2 (NVM Subsystem) 00:31:13.540 Entry Flags: 00:31:13.540 Duplicate Returned Information: 0 00:31:13.540 Explicit Persistent Connection Support for Discovery: 0 00:31:13.540 Transport Requirements: 00:31:13.540 Secure Channel: Not Required 00:31:13.540 Port ID: 0 (0x0000) 00:31:13.540 Controller ID: 65535 (0xffff) 00:31:13.540 Admin Max SQ Size: 128 00:31:13.540 Transport Service Identifier: 4420 
00:31:13.540 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:31:13.540 Transport Address: 10.0.0.2 [2024-12-14 00:11:52.601724] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:13.540 [2024-12-14 00:11:52.601739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.601751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.540 [2024-12-14 00:11:52.601759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.601766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.540 [2024-12-14 00:11:52.601772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.601780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.540 [2024-12-14 00:11:52.601786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.601794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.540 [2024-12-14 00:11:52.601807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.601813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.601819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.601831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.601850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.602094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.602111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.602133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.602153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.602289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.602307] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:13.540 [2024-12-14 
00:11:52.602317] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:13.540 [2024-12-14 00:11:52.602331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.602356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.602370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.602465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.602484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.602504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.602518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602630] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.602643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.602661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.602680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.602693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.602813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.602832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.602842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.602851] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.602864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.602988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.602997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.603001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.603006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.603018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.603025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.603029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.603038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.603051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.603125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.540 [2024-12-14 00:11:52.603133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.540 [2024-12-14 00:11:52.603138] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.603143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.540 [2024-12-14 00:11:52.603156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.540 [2024-12-14 
00:11:52.603161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.540 [2024-12-14 00:11:52.603166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.540 [2024-12-14 00:11:52.603175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.540 [2024-12-14 00:11:52.603187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.540 [2024-12-14 00:11:52.603284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.603292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.603298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.603314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.603334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.603346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.603435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.603449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.603454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:13.541 [2024-12-14 00:11:52.603459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.603471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.603497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.603511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.603588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.603596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.603601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.603618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.603640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.603654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.603730] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.603739] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.603743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.603761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.603781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.603794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.603889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.603897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.603902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.603919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.603930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.603939] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.603952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.604070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.604089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.604102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.604224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 
00:11:52.604229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.604243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.604256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.604356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.604376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.604389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:13.541 [2024-12-14 00:11:52.604514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.604526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.604547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.604560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.541 [2024-12-14 00:11:52.604676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.541 [2024-12-14 00:11:52.604687] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.541 [2024-12-14 00:11:52.604696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.541 [2024-12-14 00:11:52.604709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.541 [2024-12-14 00:11:52.604797] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.541 [2024-12-14 00:11:52.604805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.541 [2024-12-14 00:11:52.604810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.604827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.542 [2024-12-14 00:11:52.604849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.542 [2024-12-14 00:11:52.604862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.542 [2024-12-14 00:11:52.604936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.542 [2024-12-14 00:11:52.604944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.542 [2024-12-14 00:11:52.604949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.604966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.604977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.542 [2024-12-14 00:11:52.604986] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.542 [2024-12-14 00:11:52.604998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.542 [2024-12-14 00:11:52.605099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.542 [2024-12-14 00:11:52.605107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.542 [2024-12-14 00:11:52.605112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.605138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.542 [2024-12-14 00:11:52.605158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.542 [2024-12-14 00:11:52.605171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.542 [2024-12-14 00:11:52.605251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.542 [2024-12-14 00:11:52.605259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.542 [2024-12-14 00:11:52.605263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.605284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.542 [2024-12-14 
00:11:52.605289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.542 [2024-12-14 00:11:52.605303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.542 [2024-12-14 00:11:52.605316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.542 [2024-12-14 00:11:52.605404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.542 [2024-12-14 00:11:52.605413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.542 [2024-12-14 00:11:52.605417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.605422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.605434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.609453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.542 [2024-12-14 00:11:52.609462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.542 [2024-12-14 00:11:52.609473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.542 [2024-12-14 00:11:52.609493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.542 [2024-12-14 00:11:52.609673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.542 [2024-12-14 00:11:52.609682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.542 [2024-12-14 00:11:52.609687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:13.542 [2024-12-14 00:11:52.609693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.542 [2024-12-14 00:11:52.609704] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:31:13.542 00:31:13.542 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:13.804 [2024-12-14 00:11:52.707971] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:13.804 [2024-12-14 00:11:52.708041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4154109 ] 00:31:13.804 [2024-12-14 00:11:52.769948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:13.804 [2024-12-14 00:11:52.770049] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:13.804 [2024-12-14 00:11:52.770060] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:13.804 [2024-12-14 00:11:52.770081] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:13.804 [2024-12-14 00:11:52.770094] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:13.804 [2024-12-14 00:11:52.773745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:13.804 [2024-12-14 00:11:52.773788] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for 
tqpair=0x61500001db80 0 00:31:13.804 [2024-12-14 00:11:52.781457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:13.804 [2024-12-14 00:11:52.781483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:13.804 [2024-12-14 00:11:52.781496] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:13.804 [2024-12-14 00:11:52.781502] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:13.804 [2024-12-14 00:11:52.781547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.781556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.781566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.804 [2024-12-14 00:11:52.781583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:13.804 [2024-12-14 00:11:52.781608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.804 [2024-12-14 00:11:52.788454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.804 [2024-12-14 00:11:52.788476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.804 [2024-12-14 00:11:52.788482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.804 [2024-12-14 00:11:52.788508] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:13.804 [2024-12-14 00:11:52.788523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:13.804 [2024-12-14 00:11:52.788532] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to read vs wait for vs (no timeout) 00:31:13.804 [2024-12-14 00:11:52.788555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.804 [2024-12-14 00:11:52.788581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-12-14 00:11:52.788602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.804 [2024-12-14 00:11:52.788758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.804 [2024-12-14 00:11:52.788767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.804 [2024-12-14 00:11:52.788773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.804 [2024-12-14 00:11:52.788800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:13.804 [2024-12-14 00:11:52.788812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:13.804 [2024-12-14 00:11:52.788822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.804 [2024-12-14 00:11:52.788847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-12-14 00:11:52.788863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.804 [2024-12-14 00:11:52.788937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.804 [2024-12-14 00:11:52.788946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.804 [2024-12-14 00:11:52.788950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.804 [2024-12-14 00:11:52.788963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:13.804 [2024-12-14 00:11:52.788975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:13.804 [2024-12-14 00:11:52.788984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.804 [2024-12-14 00:11:52.788998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.804 [2024-12-14 00:11:52.789011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-12-14 00:11:52.789025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.804 [2024-12-14 00:11:52.789102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.804 [2024-12-14 00:11:52.789111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.804 [2024-12-14 00:11:52.789117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 
[2024-12-14 00:11:52.789123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.789131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:13.805 [2024-12-14 00:11:52.789145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.789167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.805 [2024-12-14 00:11:52.789181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.789248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.789257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.789262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.789274] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:13.805 [2024-12-14 00:11:52.789287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:13.805 [2024-12-14 00:11:52.789301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 
(timeout 15000 ms) 00:31:13.805 [2024-12-14 00:11:52.789410] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:13.805 [2024-12-14 00:11:52.789417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:13.805 [2024-12-14 00:11:52.789434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.789464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.805 [2024-12-14 00:11:52.789480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.789590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.789598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.789603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.789615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:13.805 [2024-12-14 00:11:52.789631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789637] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.789655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.805 [2024-12-14 00:11:52.789670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.789767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.789775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.789780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.789792] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:13.805 [2024-12-14 00:11:52.789799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.789811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:13.805 [2024-12-14 00:11:52.789825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.789841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.789849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.789859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:13.805 [2024-12-14 00:11:52.789873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.790004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.805 [2024-12-14 00:11:52.790013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.805 [2024-12-14 00:11:52.790017] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.790023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=0 00:31:13.805 [2024-12-14 00:11:52.790030] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.805 [2024-12-14 00:11:52.790036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.790055] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.790062] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.830706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.830712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.830735] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:13.805 [2024-12-14 00:11:52.830744] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:13.805 [2024-12-14 00:11:52.830755] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:13.805 [2024-12-14 00:11:52.830765] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:13.805 [2024-12-14 00:11:52.830772] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:13.805 [2024-12-14 00:11:52.830779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.830795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.830806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.830835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.805 [2024-12-14 00:11:52.830855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.830934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.830943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.830948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.805 [2024-12-14 00:11:52.830969] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.830985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.830996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.805 [2024-12-14 00:11:52.831005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.831026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.805 [2024-12-14 00:11:52.831033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.831052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.805 [2024-12-14 00:11:52.831059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.831076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.805 [2024-12-14 00:11:52.831083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.831100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:13.805 [2024-12-14 00:11:52.831109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.805 [2024-12-14 00:11:52.831115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.805 [2024-12-14 00:11:52.831125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.805 [2024-12-14 00:11:52.831142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:31:13.805 [2024-12-14 00:11:52.831149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:31:13.805 [2024-12-14 00:11:52.831155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:31:13.805 [2024-12-14 00:11:52.831161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.805 [2024-12-14 00:11:52.831167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.805 [2024-12-14 00:11:52.831281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.805 [2024-12-14 00:11:52.831290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.805 [2024-12-14 00:11:52.831295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.831308] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:13.806 [2024-12-14 00:11:52.831318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.831330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.831339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.831347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.831370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:13.806 [2024-12-14 00:11:52.831386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.806 [2024-12-14 00:11:52.831463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.806 [2024-12-14 00:11:52.831472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.831477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.831549] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.831567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.831580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.831598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.806 [2024-12-14 00:11:52.831613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.806 [2024-12-14 00:11:52.831713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.806 [2024-12-14 00:11:52.831722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.806 [2024-12-14 00:11:52.831727] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831732] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.806 [2024-12-14 00:11:52.831739] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.806 [2024-12-14 00:11:52.831745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.831768] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:31:13.806 [2024-12-14 00:11:52.875475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.875481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.875513] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:13.806 [2024-12-14 00:11:52.875535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.875552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.875565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.875584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.806 [2024-12-14 00:11:52.875604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.806 [2024-12-14 00:11:52.875818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.806 [2024-12-14 00:11:52.875828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.806 [2024-12-14 00:11:52.875832] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.806 [2024-12-14 00:11:52.875848] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.806 [2024-12-14 00:11:52.875859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875874] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.875880] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.806 [2024-12-14 00:11:52.916611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.916616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.916649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.916701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.806 [2024-12-14 00:11:52.916719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.806 [2024-12-14 00:11:52.916812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:31:13.806 [2024-12-14 00:11:52.916821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.806 [2024-12-14 00:11:52.916826] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916832] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=4 00:31:13.806 [2024-12-14 00:11:52.916838] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.806 [2024-12-14 00:11:52.916843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916853] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916858] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.806 [2024-12-14 00:11:52.916881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.916885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.916891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.916910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916942] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916966] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:13.806 [2024-12-14 00:11:52.916976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:13.806 [2024-12-14 00:11:52.916983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:13.806 [2024-12-14 00:11:52.917012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.917030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.806 [2024-12-14 00:11:52.917038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.806 [2024-12-14 00:11:52.917063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.806 
[2024-12-14 00:11:52.917082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.806 [2024-12-14 00:11:52.917090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.806 [2024-12-14 00:11:52.917186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.806 [2024-12-14 00:11:52.917199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.917204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.917220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.806 [2024-12-14 00:11:52.917227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.806 [2024-12-14 00:11:52.917232] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.806 [2024-12-14 00:11:52.917250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.806 [2024-12-14 00:11:52.917256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.807 [2024-12-14 00:11:52.917356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.917365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:31:13.807 [2024-12-14 00:11:52.917369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.917385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.807 [2024-12-14 00:11:52.917489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.917500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.917506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.917522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.807 [2024-12-14 00:11:52.917623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:31:13.807 [2024-12-14 00:11:52.917632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.917637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.917664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.917754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on 
tqpair(0x61500001db80) 00:31:13.807 [2024-12-14 00:11:52.917764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.807 [2024-12-14 00:11:52.917779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:31:13.807 [2024-12-14 00:11:52.917786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:31:13.807 [2024-12-14 00:11:52.917793] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:31:13.807 [2024-12-14 00:11:52.917799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:13.807 [2024-12-14 00:11:52.918000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.807 [2024-12-14 00:11:52.918010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.807 [2024-12-14 00:11:52.918015] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918021] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=8192, cccid=5 00:31:13.807 [2024-12-14 00:11:52.918028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500001db80): expected_datao=0, payload_size=8192 00:31:13.807 [2024-12-14 00:11:52.918036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918056] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918063] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.807 [2024-12-14 00:11:52.918085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.807 
[2024-12-14 00:11:52.918090] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=4 00:31:13.807 [2024-12-14 00:11:52.918101] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:13.807 [2024-12-14 00:11:52.918107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918114] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918119] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.807 [2024-12-14 00:11:52.918133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.807 [2024-12-14 00:11:52.918137] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918142] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=512, cccid=6 00:31:13.807 [2024-12-14 00:11:52.918148] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500001db80): expected_datao=0, payload_size=512 00:31:13.807 [2024-12-14 00:11:52.918153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918166] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:13.807 [2024-12-14 00:11:52.918180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:13.807 [2024-12-14 00:11:52.918184] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918189] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500001db80): datao=0, datal=4096, cccid=7 00:31:13.807 [2024-12-14 00:11:52.918195] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500001db80): expected_datao=0, payload_size=4096 00:31:13.807 [2024-12-14 00:11:52.918200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918208] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918213] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.918230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.918234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.918260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.918271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.918276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.918295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.918302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.918307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 
[2024-12-14 00:11:52.918312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500001db80 00:31:13.807 [2024-12-14 00:11:52.918323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.807 [2024-12-14 00:11:52.918330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.807 [2024-12-14 00:11:52.918335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.807 [2024-12-14 00:11:52.918340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:13.807 ===================================================== 00:31:13.807 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:13.807 ===================================================== 00:31:13.807 Controller Capabilities/Features 00:31:13.807 ================================ 00:31:13.807 Vendor ID: 8086 00:31:13.807 Subsystem Vendor ID: 8086 00:31:13.807 Serial Number: SPDK00000000000001 00:31:13.807 Model Number: SPDK bdev Controller 00:31:13.807 Firmware Version: 25.01 00:31:13.807 Recommended Arb Burst: 6 00:31:13.807 IEEE OUI Identifier: e4 d2 5c 00:31:13.807 Multi-path I/O 00:31:13.807 May have multiple subsystem ports: Yes 00:31:13.807 May have multiple controllers: Yes 00:31:13.807 Associated with SR-IOV VF: No 00:31:13.807 Max Data Transfer Size: 131072 00:31:13.807 Max Number of Namespaces: 32 00:31:13.807 Max Number of I/O Queues: 127 00:31:13.808 NVMe Specification Version (VS): 1.3 00:31:13.808 NVMe Specification Version (Identify): 1.3 00:31:13.808 Maximum Queue Entries: 128 00:31:13.808 Contiguous Queues Required: Yes 00:31:13.808 Arbitration Mechanisms Supported 00:31:13.808 Weighted Round Robin: Not Supported 00:31:13.808 Vendor Specific: Not Supported 00:31:13.808 Reset Timeout: 15000 ms 00:31:13.808 Doorbell Stride: 4 bytes 00:31:13.808 NVM Subsystem Reset: Not Supported 00:31:13.808 Command Sets Supported 
00:31:13.808 NVM Command Set: Supported 00:31:13.808 Boot Partition: Not Supported 00:31:13.808 Memory Page Size Minimum: 4096 bytes 00:31:13.808 Memory Page Size Maximum: 4096 bytes 00:31:13.808 Persistent Memory Region: Not Supported 00:31:13.808 Optional Asynchronous Events Supported 00:31:13.808 Namespace Attribute Notices: Supported 00:31:13.808 Firmware Activation Notices: Not Supported 00:31:13.808 ANA Change Notices: Not Supported 00:31:13.808 PLE Aggregate Log Change Notices: Not Supported 00:31:13.808 LBA Status Info Alert Notices: Not Supported 00:31:13.808 EGE Aggregate Log Change Notices: Not Supported 00:31:13.808 Normal NVM Subsystem Shutdown event: Not Supported 00:31:13.808 Zone Descriptor Change Notices: Not Supported 00:31:13.808 Discovery Log Change Notices: Not Supported 00:31:13.808 Controller Attributes 00:31:13.808 128-bit Host Identifier: Supported 00:31:13.808 Non-Operational Permissive Mode: Not Supported 00:31:13.808 NVM Sets: Not Supported 00:31:13.808 Read Recovery Levels: Not Supported 00:31:13.808 Endurance Groups: Not Supported 00:31:13.808 Predictable Latency Mode: Not Supported 00:31:13.808 Traffic Based Keep ALive: Not Supported 00:31:13.808 Namespace Granularity: Not Supported 00:31:13.808 SQ Associations: Not Supported 00:31:13.808 UUID List: Not Supported 00:31:13.808 Multi-Domain Subsystem: Not Supported 00:31:13.808 Fixed Capacity Management: Not Supported 00:31:13.808 Variable Capacity Management: Not Supported 00:31:13.808 Delete Endurance Group: Not Supported 00:31:13.808 Delete NVM Set: Not Supported 00:31:13.808 Extended LBA Formats Supported: Not Supported 00:31:13.808 Flexible Data Placement Supported: Not Supported 00:31:13.808 00:31:13.808 Controller Memory Buffer Support 00:31:13.808 ================================ 00:31:13.808 Supported: No 00:31:13.808 00:31:13.808 Persistent Memory Region Support 00:31:13.808 ================================ 00:31:13.808 Supported: No 00:31:13.808 00:31:13.808 Admin Command Set 
Attributes 00:31:13.808 ============================ 00:31:13.808 Security Send/Receive: Not Supported 00:31:13.808 Format NVM: Not Supported 00:31:13.808 Firmware Activate/Download: Not Supported 00:31:13.808 Namespace Management: Not Supported 00:31:13.808 Device Self-Test: Not Supported 00:31:13.808 Directives: Not Supported 00:31:13.808 NVMe-MI: Not Supported 00:31:13.808 Virtualization Management: Not Supported 00:31:13.808 Doorbell Buffer Config: Not Supported 00:31:13.808 Get LBA Status Capability: Not Supported 00:31:13.808 Command & Feature Lockdown Capability: Not Supported 00:31:13.808 Abort Command Limit: 4 00:31:13.808 Async Event Request Limit: 4 00:31:13.808 Number of Firmware Slots: N/A 00:31:13.808 Firmware Slot 1 Read-Only: N/A 00:31:13.808 Firmware Activation Without Reset: N/A 00:31:13.808 Multiple Update Detection Support: N/A 00:31:13.808 Firmware Update Granularity: No Information Provided 00:31:13.808 Per-Namespace SMART Log: No 00:31:13.808 Asymmetric Namespace Access Log Page: Not Supported 00:31:13.808 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:13.808 Command Effects Log Page: Supported 00:31:13.808 Get Log Page Extended Data: Supported 00:31:13.808 Telemetry Log Pages: Not Supported 00:31:13.808 Persistent Event Log Pages: Not Supported 00:31:13.808 Supported Log Pages Log Page: May Support 00:31:13.808 Commands Supported & Effects Log Page: Not Supported 00:31:13.808 Feature Identifiers & Effects Log Page:May Support 00:31:13.808 NVMe-MI Commands & Effects Log Page: May Support 00:31:13.808 Data Area 4 for Telemetry Log: Not Supported 00:31:13.808 Error Log Page Entries Supported: 128 00:31:13.808 Keep Alive: Supported 00:31:13.808 Keep Alive Granularity: 10000 ms 00:31:13.808 00:31:13.808 NVM Command Set Attributes 00:31:13.808 ========================== 00:31:13.808 Submission Queue Entry Size 00:31:13.808 Max: 64 00:31:13.808 Min: 64 00:31:13.808 Completion Queue Entry Size 00:31:13.808 Max: 16 00:31:13.808 Min: 16 00:31:13.808 
Number of Namespaces: 32 00:31:13.808 Compare Command: Supported 00:31:13.808 Write Uncorrectable Command: Not Supported 00:31:13.808 Dataset Management Command: Supported 00:31:13.808 Write Zeroes Command: Supported 00:31:13.808 Set Features Save Field: Not Supported 00:31:13.808 Reservations: Supported 00:31:13.808 Timestamp: Not Supported 00:31:13.808 Copy: Supported 00:31:13.808 Volatile Write Cache: Present 00:31:13.808 Atomic Write Unit (Normal): 1 00:31:13.808 Atomic Write Unit (PFail): 1 00:31:13.808 Atomic Compare & Write Unit: 1 00:31:13.808 Fused Compare & Write: Supported 00:31:13.808 Scatter-Gather List 00:31:13.808 SGL Command Set: Supported 00:31:13.808 SGL Keyed: Supported 00:31:13.808 SGL Bit Bucket Descriptor: Not Supported 00:31:13.808 SGL Metadata Pointer: Not Supported 00:31:13.808 Oversized SGL: Not Supported 00:31:13.808 SGL Metadata Address: Not Supported 00:31:13.808 SGL Offset: Supported 00:31:13.808 Transport SGL Data Block: Not Supported 00:31:13.808 Replay Protected Memory Block: Not Supported 00:31:13.808 00:31:13.808 Firmware Slot Information 00:31:13.808 ========================= 00:31:13.808 Active slot: 1 00:31:13.808 Slot 1 Firmware Revision: 25.01 00:31:13.808 00:31:13.808 00:31:13.808 Commands Supported and Effects 00:31:13.808 ============================== 00:31:13.808 Admin Commands 00:31:13.808 -------------- 00:31:13.808 Get Log Page (02h): Supported 00:31:13.808 Identify (06h): Supported 00:31:13.808 Abort (08h): Supported 00:31:13.808 Set Features (09h): Supported 00:31:13.808 Get Features (0Ah): Supported 00:31:13.808 Asynchronous Event Request (0Ch): Supported 00:31:13.808 Keep Alive (18h): Supported 00:31:13.808 I/O Commands 00:31:13.808 ------------ 00:31:13.808 Flush (00h): Supported LBA-Change 00:31:13.808 Write (01h): Supported LBA-Change 00:31:13.808 Read (02h): Supported 00:31:13.808 Compare (05h): Supported 00:31:13.808 Write Zeroes (08h): Supported LBA-Change 00:31:13.808 Dataset Management (09h): Supported 
LBA-Change 00:31:13.808 Copy (19h): Supported LBA-Change 00:31:13.808 00:31:13.808 Error Log 00:31:13.808 ========= 00:31:13.808 00:31:13.808 Arbitration 00:31:13.808 =========== 00:31:13.809 Arbitration Burst: 1 00:31:13.809 00:31:13.809 Power Management 00:31:13.809 ================ 00:31:13.809 Number of Power States: 1 00:31:13.809 Current Power State: Power State #0 00:31:13.809 Power State #0: 00:31:13.809 Max Power: 0.00 W 00:31:13.809 Non-Operational State: Operational 00:31:13.809 Entry Latency: Not Reported 00:31:13.809 Exit Latency: Not Reported 00:31:13.809 Relative Read Throughput: 0 00:31:13.809 Relative Read Latency: 0 00:31:13.809 Relative Write Throughput: 0 00:31:13.809 Relative Write Latency: 0 00:31:13.809 Idle Power: Not Reported 00:31:13.809 Active Power: Not Reported 00:31:13.809 Non-Operational Permissive Mode: Not Supported 00:31:13.809 00:31:13.809 Health Information 00:31:13.809 ================== 00:31:13.809 Critical Warnings: 00:31:13.809 Available Spare Space: OK 00:31:13.809 Temperature: OK 00:31:13.809 Device Reliability: OK 00:31:13.809 Read Only: No 00:31:13.809 Volatile Memory Backup: OK 00:31:13.809 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:13.809 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:13.809 Available Spare: 0% 00:31:13.809 Available Spare Threshold: 0% 00:31:13.809 Life Percentage Used:[2024-12-14 00:11:52.918481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.918489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.918500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.918518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:31:13.809 [2024-12-14 00:11:52.918707] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.918715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.918720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.918729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.918773] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:13.809 [2024-12-14 00:11:52.918787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.918797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.809 [2024-12-14 00:11:52.918805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.918812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.809 [2024-12-14 00:11:52.918819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.918826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.809 [2024-12-14 00:11:52.918832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.918839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.809 [2024-12-14 00:11:52.918849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.809 [2024-12-14 
00:11:52.918856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.918861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.918872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.918889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.809 [2024-12-14 00:11:52.918970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.918981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.918987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.918992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.919003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.919029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.919048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.809 [2024-12-14 00:11:52.919145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.919154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.919158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:31:13.809 [2024-12-14 00:11:52.919164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.919171] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:13.809 [2024-12-14 00:11:52.919178] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:13.809 [2024-12-14 00:11:52.919193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.919217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.919231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.809 [2024-12-14 00:11:52.919307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.919315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.919320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.919337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.919348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.919357] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.919371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.809 [2024-12-14 00:11:52.923450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.923471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.923476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.923482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.923499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.923505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.923510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500001db80) 00:31:13.809 [2024-12-14 00:11:52.923521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.809 [2024-12-14 00:11:52.923538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:31:13.809 [2024-12-14 00:11:52.923705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:13.809 [2024-12-14 00:11:52.923714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:13.809 [2024-12-14 00:11:52.923719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:13.809 [2024-12-14 00:11:52.923724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500001db80 00:31:13.809 [2024-12-14 00:11:52.923734] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:31:14.068 0% 00:31:14.068 Data Units Read: 0 00:31:14.068 Data Units Written: 0 00:31:14.068 Host Read Commands: 0 00:31:14.068 Host Write Commands: 0 00:31:14.068 Controller Busy Time: 0 minutes 00:31:14.068 Power Cycles: 0 00:31:14.068 Power On Hours: 0 hours 00:31:14.068 Unsafe Shutdowns: 0 00:31:14.068 Unrecoverable Media Errors: 0 00:31:14.068 Lifetime Error Log Entries: 0 00:31:14.068 Warning Temperature Time: 0 minutes 00:31:14.068 Critical Temperature Time: 0 minutes 00:31:14.068 00:31:14.068 Number of Queues 00:31:14.068 ================ 00:31:14.068 Number of I/O Submission Queues: 127 00:31:14.068 Number of I/O Completion Queues: 127 00:31:14.068 00:31:14.068 Active Namespaces 00:31:14.068 ================= 00:31:14.068 Namespace ID:1 00:31:14.068 Error Recovery Timeout: Unlimited 00:31:14.068 Command Set Identifier: NVM (00h) 00:31:14.068 Deallocate: Supported 00:31:14.068 Deallocated/Unwritten Error: Not Supported 00:31:14.068 Deallocated Read Value: Unknown 00:31:14.068 Deallocate in Write Zeroes: Not Supported 00:31:14.068 Deallocated Guard Field: 0xFFFF 00:31:14.068 Flush: Supported 00:31:14.068 Reservation: Supported 00:31:14.068 Namespace Sharing Capabilities: Multiple Controllers 00:31:14.068 Size (in LBAs): 131072 (0GiB) 00:31:14.068 Capacity (in LBAs): 131072 (0GiB) 00:31:14.068 Utilization (in LBAs): 131072 (0GiB) 00:31:14.068 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:14.068 EUI64: ABCDEF0123456789 00:31:14.068 UUID: 4d1cbfeb-ab4d-4c37-a5c0-c40f43a1a9be 00:31:14.068 Thin Provisioning: Not Supported 00:31:14.069 Per-NS Atomic Units: Yes 00:31:14.069 Atomic Boundary Size (Normal): 0 00:31:14.069 Atomic Boundary Size (PFail): 0 00:31:14.069 Atomic Boundary Offset: 0 00:31:14.069 Maximum Single Source Range Length: 65535 00:31:14.069 Maximum Copy Length: 65535 00:31:14.069 Maximum Source Range Count: 1 00:31:14.069 NGUID/EUI64 Never Reused: No 00:31:14.069 Namespace Write 
Protected: No 00:31:14.069 Number of LBA Formats: 1 00:31:14.069 Current LBA Format: LBA Format #00 00:31:14.069 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:14.069 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.069 00:11:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.069 rmmod nvme_tcp 00:31:14.069 rmmod nvme_fabrics 00:31:14.069 rmmod nvme_keyring 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 4153894 ']' 00:31:14.069 
00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 4153894 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 4153894 ']' 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 4153894 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4153894 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4153894' 00:31:14.069 killing process with pid 4153894 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 4153894 00:31:14.069 00:11:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 4153894 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:15.549 00:11:54 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.549 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.550 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.550 00:11:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.463 00:31:17.463 real 0m10.655s 00:31:17.463 user 0m11.857s 00:31:17.463 sys 0m4.557s 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:17.463 ************************************ 00:31:17.463 END TEST nvmf_identify 00:31:17.463 ************************************ 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.463 00:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.722 ************************************ 00:31:17.722 START TEST nvmf_perf 00:31:17.722 ************************************ 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:17.722 * Looking for test storage... 
00:31:17.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:17.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.722 --rc genhtml_branch_coverage=1 00:31:17.722 --rc genhtml_function_coverage=1 00:31:17.722 --rc genhtml_legend=1 00:31:17.722 --rc geninfo_all_blocks=1 00:31:17.722 --rc geninfo_unexecuted_blocks=1 00:31:17.722 00:31:17.722 ' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:17.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:17.722 --rc genhtml_branch_coverage=1 00:31:17.722 --rc genhtml_function_coverage=1 00:31:17.722 --rc genhtml_legend=1 00:31:17.722 --rc geninfo_all_blocks=1 00:31:17.722 --rc geninfo_unexecuted_blocks=1 00:31:17.722 00:31:17.722 ' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:17.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.722 --rc genhtml_branch_coverage=1 00:31:17.722 --rc genhtml_function_coverage=1 00:31:17.722 --rc genhtml_legend=1 00:31:17.722 --rc geninfo_all_blocks=1 00:31:17.722 --rc geninfo_unexecuted_blocks=1 00:31:17.722 00:31:17.722 ' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:17.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.722 --rc genhtml_branch_coverage=1 00:31:17.722 --rc genhtml_function_coverage=1 00:31:17.722 --rc genhtml_legend=1 00:31:17.722 --rc geninfo_all_blocks=1 00:31:17.722 --rc geninfo_unexecuted_blocks=1 00:31:17.722 00:31:17.722 ' 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.722 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:17.723 00:11:56 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.723 00:11:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.991 00:12:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.991 
00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:22.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:22.991 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:22.991 Found net devices under 0000:af:00.0: cvl_0_0 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.991 00:12:01 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:22.991 Found net devices under 0000:af:00.1: cvl_0_1 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.991 00:12:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.992 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:31:23.249 00:31:23.249 --- 10.0.0.2 ping statistics --- 00:31:23.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.249 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:31:23.249 00:31:23.249 --- 10.0.0.1 ping statistics --- 00:31:23.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.249 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=4157913 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 4157913 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 4157913 ']' 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.249 00:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:23.249 [2024-12-14 00:12:02.368134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:31:23.249 [2024-12-14 00:12:02.368226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.508 [2024-12-14 00:12:02.487192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.508 [2024-12-14 00:12:02.608063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.508 [2024-12-14 00:12:02.608109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.508 [2024-12-14 00:12:02.608120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.508 [2024-12-14 00:12:02.608131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.508 [2024-12-14 00:12:02.608140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:23.508 [2024-12-14 00:12:02.610697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.508 [2024-12-14 00:12:02.610721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.508 [2024-12-14 00:12:02.610803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.508 [2024-12-14 00:12:02.610810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.074 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.074 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:24.074 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:24.074 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.074 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:24.332 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.332 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:24.332 00:12:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:27.620 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:27.620 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:27.620 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:31:27.620 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:27.877 00:12:06 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:31:27.877 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:31:27.877 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:27.877 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:27.878 00:12:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:27.878 [2024-12-14 00:12:06.978364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.878 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.136 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:28.136 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.394 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:28.394 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:28.653 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.911 [2024-12-14 00:12:07.825427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.911 00:12:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:31:29.170 00:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:31:29.170 00:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:29.170 00:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:29.170 00:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:31:30.547 Initializing NVMe Controllers 00:31:30.547 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:31:30.547 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:31:30.547 Initialization complete. Launching workers. 00:31:30.547 ======================================================== 00:31:30.547 Latency(us) 00:31:30.547 Device Information : IOPS MiB/s Average min max 00:31:30.547 PCIE (0000:5e:00.0) NSID 1 from core 0: 91472.10 357.31 349.31 42.91 4335.20 00:31:30.547 ======================================================== 00:31:30.547 Total : 91472.10 357.31 349.31 42.91 4335.20 00:31:30.547 00:31:30.547 00:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.923 Initializing NVMe Controllers 00:31:31.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:31.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:31.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:31.924 Initialization complete. Launching workers. 
00:31:31.924 ======================================================== 00:31:31.924 Latency(us) 00:31:31.924 Device Information : IOPS MiB/s Average min max 00:31:31.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 122.54 0.48 8429.88 136.32 44877.52 00:31:31.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.77 0.24 17090.77 6515.76 47907.98 00:31:31.924 ======================================================== 00:31:31.924 Total : 183.32 0.72 11301.15 136.32 47907.98 00:31:31.924 00:31:31.924 00:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:33.301 Initializing NVMe Controllers 00:31:33.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:33.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:33.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:33.301 Initialization complete. Launching workers. 
00:31:33.301 ======================================================== 00:31:33.301 Latency(us) 00:31:33.301 Device Information : IOPS MiB/s Average min max 00:31:33.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9436.23 36.86 3389.55 480.18 7683.52 00:31:33.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3874.31 15.13 8284.16 6018.29 15745.03 00:31:33.301 ======================================================== 00:31:33.301 Total : 13310.54 51.99 4814.23 480.18 15745.03 00:31:33.301 00:31:33.301 00:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:31:33.301 00:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:31:33.301 00:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:36.589 Initializing NVMe Controllers 00:31:36.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.589 Controller IO queue size 128, less than required. 00:31:36.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.589 Controller IO queue size 128, less than required. 00:31:36.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:36.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:36.589 Initialization complete. Launching workers. 
00:31:36.589 ======================================================== 00:31:36.589 Latency(us) 00:31:36.589 Device Information : IOPS MiB/s Average min max 00:31:36.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1554.72 388.68 85801.69 58360.47 334315.42 00:31:36.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 546.38 136.59 251454.80 120767.73 583253.75 00:31:36.589 ======================================================== 00:31:36.589 Total : 2101.10 525.27 128878.58 58360.47 583253.75 00:31:36.589 00:31:36.589 00:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:31:36.589 No valid NVMe controllers or AIO or URING devices found 00:31:36.589 Initializing NVMe Controllers 00:31:36.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:36.589 Controller IO queue size 128, less than required. 00:31:36.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:31:36.589 Controller IO queue size 128, less than required. 00:31:36.589 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.589 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:31:36.589 WARNING: Some requested NVMe devices were skipped 00:31:36.589 00:12:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:31:39.877 Initializing NVMe Controllers 00:31:39.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.877 Controller IO queue size 128, less than required. 00:31:39.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:39.877 Controller IO queue size 128, less than required. 00:31:39.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:39.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:39.877 Initialization complete. Launching workers. 
00:31:39.877 00:31:39.877 ==================== 00:31:39.877 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:31:39.877 TCP transport: 00:31:39.877 polls: 5969 00:31:39.877 idle_polls: 2893 00:31:39.877 sock_completions: 3076 00:31:39.877 nvme_completions: 5461 00:31:39.877 submitted_requests: 8182 00:31:39.877 queued_requests: 1 00:31:39.877 00:31:39.877 ==================== 00:31:39.877 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:31:39.877 TCP transport: 00:31:39.877 polls: 12167 00:31:39.877 idle_polls: 8909 00:31:39.877 sock_completions: 3258 00:31:39.877 nvme_completions: 5619 00:31:39.877 submitted_requests: 8396 00:31:39.877 queued_requests: 1 00:31:39.877 ======================================================== 00:31:39.877 Latency(us) 00:31:39.877 Device Information : IOPS MiB/s Average min max 00:31:39.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1364.87 341.22 99714.06 50912.98 375926.48 00:31:39.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1404.36 351.09 95266.99 45633.90 499753.87 00:31:39.877 ======================================================== 00:31:39.877 Total : 2769.23 692.31 97458.81 45633.90 499753.87 00:31:39.877 00:31:39.877 00:12:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:31:39.877 00:12:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.136 00:12:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:31:40.136 00:12:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:31:40.136 00:12:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=e637a949-b9e1-4346-97ae-da81bfc33c8b 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb e637a949-b9e1-4346-97ae-da81bfc33c8b 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=e637a949-b9e1-4346-97ae-da81bfc33c8b 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:43.422 { 00:31:43.422 "uuid": "e637a949-b9e1-4346-97ae-da81bfc33c8b", 00:31:43.422 "name": "lvs_0", 00:31:43.422 "base_bdev": "Nvme0n1", 00:31:43.422 "total_data_clusters": 238234, 00:31:43.422 "free_clusters": 238234, 00:31:43.422 "block_size": 512, 00:31:43.422 "cluster_size": 4194304 00:31:43.422 } 00:31:43.422 ]' 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="e637a949-b9e1-4346-97ae-da81bfc33c8b") .free_clusters' 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="e637a949-b9e1-4346-97ae-da81bfc33c8b") .cluster_size' 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:31:43.422 952936 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:31:43.422 00:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e637a949-b9e1-4346-97ae-da81bfc33c8b lbd_0 20480 00:31:43.989 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7fa73767-58e6-469e-836d-77ab4a23e236 00:31:43.989 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7fa73767-58e6-469e-836d-77ab4a23e236 lvs_n_0 00:31:44.926 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d0e8796e-3cf3-4780-b82d-92f61e698eb3 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d0e8796e-3cf3-4780-b82d-92f61e698eb3 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=d0e8796e-3cf3-4780-b82d-92f61e698eb3 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:44.927 { 00:31:44.927 "uuid": "e637a949-b9e1-4346-97ae-da81bfc33c8b", 00:31:44.927 "name": "lvs_0", 00:31:44.927 "base_bdev": "Nvme0n1", 00:31:44.927 "total_data_clusters": 238234, 00:31:44.927 "free_clusters": 233114, 00:31:44.927 "block_size": 512, 00:31:44.927 
"cluster_size": 4194304 00:31:44.927 }, 00:31:44.927 { 00:31:44.927 "uuid": "d0e8796e-3cf3-4780-b82d-92f61e698eb3", 00:31:44.927 "name": "lvs_n_0", 00:31:44.927 "base_bdev": "7fa73767-58e6-469e-836d-77ab4a23e236", 00:31:44.927 "total_data_clusters": 5114, 00:31:44.927 "free_clusters": 5114, 00:31:44.927 "block_size": 512, 00:31:44.927 "cluster_size": 4194304 00:31:44.927 } 00:31:44.927 ]' 00:31:44.927 00:12:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d0e8796e-3cf3-4780-b82d-92f61e698eb3") .free_clusters' 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d0e8796e-3cf3-4780-b82d-92f61e698eb3") .cluster_size' 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:31:44.927 20456 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:31:44.927 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0e8796e-3cf3-4780-b82d-92f61e698eb3 lbd_nest_0 20456 00:31:45.185 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=848900a4-7bf4-449b-ae7d-b7e13a124332 00:31:45.185 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:45.444 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:31:45.444 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 848900a4-7bf4-449b-ae7d-b7e13a124332 00:31:45.703 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.962 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:31:45.962 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:31:45.962 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:45.962 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:45.962 00:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:58.171 Initializing NVMe Controllers 00:31:58.171 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:58.171 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:58.171 Initialization complete. Launching workers. 
00:31:58.171 ======================================================== 00:31:58.171 Latency(us) 00:31:58.171 Device Information : IOPS MiB/s Average min max 00:31:58.171 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.70 0.02 21929.35 153.91 47885.65 00:31:58.171 ======================================================== 00:31:58.171 Total : 45.70 0.02 21929.35 153.91 47885.65 00:31:58.171 00:31:58.171 00:12:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:58.171 00:12:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:08.148 Initializing NVMe Controllers 00:32:08.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:08.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:08.148 Initialization complete. Launching workers. 
00:32:08.148 ======================================================== 00:32:08.148 Latency(us) 00:32:08.148 Device Information : IOPS MiB/s Average min max 00:32:08.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.08 9.14 13692.50 4986.66 50868.55 00:32:08.148 ======================================================== 00:32:08.148 Total : 73.08 9.14 13692.50 4986.66 50868.55 00:32:08.148 00:32:08.148 00:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:08.148 00:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:08.148 00:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.125 Initializing NVMe Controllers 00:32:18.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:18.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:18.125 Initialization complete. Launching workers. 
00:32:18.125 ======================================================== 00:32:18.125 Latency(us) 00:32:18.125 Device Information : IOPS MiB/s Average min max 00:32:18.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8197.30 4.00 3904.08 274.88 9417.72 00:32:18.125 ======================================================== 00:32:18.125 Total : 8197.30 4.00 3904.08 274.88 9417.72 00:32:18.125 00:32:18.125 00:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:18.125 00:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:28.099 Initializing NVMe Controllers 00:32:28.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.099 Initialization complete. Launching workers. 
00:32:28.099 ======================================================== 00:32:28.099 Latency(us) 00:32:28.099 Device Information : IOPS MiB/s Average min max 00:32:28.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3908.45 488.56 8193.08 700.50 25742.89 00:32:28.099 ======================================================== 00:32:28.099 Total : 3908.45 488.56 8193.08 700.50 25742.89 00:32:28.099 00:32:28.099 00:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:28.099 00:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:28.099 00:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:38.076 Initializing NVMe Controllers 00:32:38.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:38.076 Controller IO queue size 128, less than required. 00:32:38.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:38.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:38.076 Initialization complete. Launching workers. 
00:32:38.076 ======================================================== 00:32:38.076 Latency(us) 00:32:38.076 Device Information : IOPS MiB/s Average min max 00:32:38.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13095.72 6.39 9773.56 1669.61 25610.10 00:32:38.076 ======================================================== 00:32:38.076 Total : 13095.72 6.39 9773.56 1669.61 25610.10 00:32:38.076 00:32:38.076 00:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:38.076 00:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:50.407 Initializing NVMe Controllers 00:32:50.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:50.407 Controller IO queue size 128, less than required. 00:32:50.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:50.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:50.407 Initialization complete. Launching workers. 
00:32:50.407 ======================================================== 00:32:50.407 Latency(us) 00:32:50.407 Device Information : IOPS MiB/s Average min max 00:32:50.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1187.37 148.42 107960.72 22331.27 211681.17 00:32:50.407 ======================================================== 00:32:50.407 Total : 1187.37 148.42 107960.72 22331.27 211681.17 00:32:50.407 00:32:50.407 00:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:50.407 00:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 848900a4-7bf4-449b-ae7d-b7e13a124332 00:32:50.407 00:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:50.407 00:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7fa73767-58e6-469e-836d-77ab4a23e236 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.407 rmmod nvme_tcp 00:32:50.407 rmmod nvme_fabrics 00:32:50.407 rmmod nvme_keyring 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 4157913 ']' 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 4157913 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 4157913 ']' 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 4157913 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4157913 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4157913' 00:32:50.407 killing process with pid 4157913 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 4157913 00:32:50.407 00:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 4157913 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:52.943 00:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:54.849 00:32:54.849 real 1m37.226s 00:32:54.849 user 5m49.076s 00:32:54.849 sys 0m16.783s 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:54.849 ************************************ 00:32:54.849 END TEST nvmf_perf 00:32:54.849 ************************************ 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:54.849 ************************************ 00:32:54.849 START TEST nvmf_fio_host 00:32:54.849 ************************************ 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:54.849 * Looking for test storage... 00:32:54.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:54.849 00:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:32:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.109 --rc genhtml_branch_coverage=1 00:32:55.109 --rc genhtml_function_coverage=1 00:32:55.109 --rc genhtml_legend=1 00:32:55.109 --rc geninfo_all_blocks=1 00:32:55.109 --rc geninfo_unexecuted_blocks=1 00:32:55.109 00:32:55.109 ' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.109 --rc genhtml_branch_coverage=1 00:32:55.109 --rc genhtml_function_coverage=1 00:32:55.109 --rc genhtml_legend=1 00:32:55.109 --rc geninfo_all_blocks=1 00:32:55.109 --rc geninfo_unexecuted_blocks=1 00:32:55.109 00:32:55.109 ' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.109 --rc genhtml_branch_coverage=1 00:32:55.109 --rc genhtml_function_coverage=1 00:32:55.109 --rc genhtml_legend=1 00:32:55.109 --rc geninfo_all_blocks=1 00:32:55.109 --rc geninfo_unexecuted_blocks=1 00:32:55.109 00:32:55.109 ' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.109 --rc genhtml_branch_coverage=1 00:32:55.109 --rc genhtml_function_coverage=1 00:32:55.109 --rc genhtml_legend=1 00:32:55.109 --rc geninfo_all_blocks=1 00:32:55.109 --rc geninfo_unexecuted_blocks=1 00:32:55.109 00:32:55.109 ' 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.109 00:13:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.109 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:55.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:55.110 00:13:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:55.110 00:13:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.384 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:33:00.385 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:00.385 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.385 00:13:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:00.385 Found net devices under 0000:af:00.0: cvl_0_0 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:00.385 Found net devices under 0000:af:00.1: cvl_0_1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.385 00:13:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.385 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.644 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:33:00.644 00:33:00.644 --- 10.0.0.2 ping statistics --- 00:33:00.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.644 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:33:00.645 00:33:00.645 --- 10.0.0.1 ping statistics --- 00:33:00.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.645 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=4175845 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 4175845 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 4175845 ']' 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.645 00:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.645 [2024-12-14 00:13:39.669981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:00.645 [2024-12-14 00:13:39.670071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.904 [2024-12-14 00:13:39.787743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:00.904 [2024-12-14 00:13:39.901952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.904 [2024-12-14 00:13:39.901999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:00.904 [2024-12-14 00:13:39.902010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.904 [2024-12-14 00:13:39.902021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.904 [2024-12-14 00:13:39.902029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.904 [2024-12-14 00:13:39.907470] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.904 [2024-12-14 00:13:39.907491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.904 [2024-12-14 00:13:39.907555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.904 [2024-12-14 00:13:39.907564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:01.474 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.474 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:01.474 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:01.740 [2024-12-14 00:13:40.637518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:01.740 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:01.740 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.740 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.740 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:01.999 Malloc1 00:33:01.999 00:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:02.259 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:02.259 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.518 [2024-12-14 00:13:41.551905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.518 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:02.777 00:13:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:02.777 00:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:03.036 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:03.036 fio-3.35 00:33:03.036 Starting 1 thread 00:33:05.569 00:33:05.569 test: (groupid=0, jobs=1): err= 0: pid=4176389: Sat Dec 14 00:13:44 2024 00:33:05.569 read: 
IOPS=10.0k, BW=39.3MiB/s (41.2MB/s)(78.7MiB/2006msec) 00:33:05.569 slat (nsec): min=1704, max=187670, avg=1883.56, stdev=1860.96 00:33:05.569 clat (usec): min=2733, max=11964, avg=6989.19, stdev=533.05 00:33:05.569 lat (usec): min=2768, max=11966, avg=6991.07, stdev=532.89 00:33:05.569 clat percentiles (usec): 00:33:05.569 | 1.00th=[ 5735], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6587], 00:33:05.569 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 6980], 60.00th=[ 7111], 00:33:05.569 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7767], 00:33:05.569 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[10552], 99.95th=[11338], 00:33:05.569 | 99.99th=[11863] 00:33:05.569 bw ( KiB/s): min=39192, max=40600, per=99.93%, avg=40168.00, stdev=663.31, samples=4 00:33:05.569 iops : min= 9798, max=10150, avg=10042.00, stdev=165.83, samples=4 00:33:05.569 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.8MiB/2006msec); 0 zone resets 00:33:05.569 slat (nsec): min=1759, max=163796, avg=1945.43, stdev=1348.97 00:33:05.569 clat (usec): min=1990, max=10634, avg=5662.47, stdev=429.37 00:33:05.569 lat (usec): min=2006, max=10636, avg=5664.42, stdev=429.26 00:33:05.569 clat percentiles (usec): 00:33:05.569 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:33:05.569 | 30.00th=[ 5473], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5735], 00:33:05.569 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6325], 00:33:05.569 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7963], 99.95th=[ 8848], 00:33:05.569 | 99.99th=[10552] 00:33:05.569 bw ( KiB/s): min=39608, max=40688, per=100.00%, avg=40242.00, stdev=463.19, samples=4 00:33:05.569 iops : min= 9902, max=10172, avg=10060.50, stdev=115.80, samples=4 00:33:05.569 lat (msec) : 2=0.01%, 4=0.12%, 10=99.80%, 20=0.08% 00:33:05.569 cpu : usr=74.11%, sys=24.64%, ctx=94, majf=0, minf=1505 00:33:05.569 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:05.569 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:05.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:05.569 issued rwts: total=20158,20174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:05.569 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:05.569 00:33:05.569 Run status group 0 (all jobs): 00:33:05.569 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.7MiB (82.6MB), run=2006-2006msec 00:33:05.569 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.8MiB (82.6MB), run=2006-2006msec 00:33:05.829 ----------------------------------------------------- 00:33:05.829 Suppressions used: 00:33:05.829 count bytes template 00:33:05.829 1 57 /usr/src/fio/parse.c 00:33:05.829 1 8 libtcmalloc_minimal.so 00:33:05.829 ----------------------------------------------------- 00:33:05.829 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:05.829 00:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:06.394 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:06.394 fio-3.35 00:33:06.394 Starting 1 thread 00:33:08.928 00:33:08.928 test: (groupid=0, jobs=1): err= 0: pid=4176988: Sat Dec 14 00:13:47 2024 00:33:08.928 read: IOPS=9404, BW=147MiB/s (154MB/s)(295MiB/2007msec) 00:33:08.928 slat (nsec): min=2728, max=91420, avg=3137.84, stdev=1391.64 00:33:08.928 clat (usec): 
min=1919, max=50102, avg=7886.54, stdev=3556.40 00:33:08.928 lat (usec): min=1922, max=50105, avg=7889.67, stdev=3556.41 00:33:08.928 clat percentiles (usec): 00:33:08.928 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6259], 00:33:08.928 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 7635], 60.00th=[ 8094], 00:33:08.928 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10421], 00:33:08.928 | 99.00th=[13173], 99.50th=[43779], 99.90th=[49546], 99.95th=[49546], 00:33:08.928 | 99.99th=[50070] 00:33:08.928 bw ( KiB/s): min=66176, max=83968, per=49.53%, avg=74528.00, stdev=7290.71, samples=4 00:33:08.928 iops : min= 4136, max= 5248, avg=4658.00, stdev=455.67, samples=4 00:33:08.928 write: IOPS=5549, BW=86.7MiB/s (90.9MB/s)(152MiB/1757msec); 0 zone resets 00:33:08.928 slat (usec): min=29, max=281, avg=32.04, stdev= 5.57 00:33:08.928 clat (usec): min=3335, max=16505, avg=10009.50, stdev=1637.25 00:33:08.928 lat (usec): min=3365, max=16535, avg=10041.55, stdev=1637.46 00:33:08.928 clat percentiles (usec): 00:33:08.928 | 1.00th=[ 6652], 5.00th=[ 7504], 10.00th=[ 7963], 20.00th=[ 8586], 00:33:08.928 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:33:08.928 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12256], 95.00th=[12911], 00:33:08.928 | 99.00th=[13960], 99.50th=[14222], 99.90th=[15926], 99.95th=[16188], 00:33:08.928 | 99.99th=[16450] 00:33:08.928 bw ( KiB/s): min=68864, max=87040, per=87.44%, avg=77648.00, stdev=7430.87, samples=4 00:33:08.928 iops : min= 4304, max= 5440, avg=4853.00, stdev=464.43, samples=4 00:33:08.929 lat (msec) : 2=0.01%, 4=0.42%, 10=78.15%, 20=20.97%, 50=0.43% 00:33:08.929 lat (msec) : 100=0.01% 00:33:08.929 cpu : usr=87.05%, sys=12.21%, ctx=37, majf=0, minf=2388 00:33:08.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:33:08.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:08.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:33:08.929 issued rwts: total=18875,9751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:08.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:08.929 00:33:08.929 Run status group 0 (all jobs): 00:33:08.929 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2007-2007msec 00:33:08.929 WRITE: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=152MiB (160MB), run=1757-1757msec 00:33:08.929 ----------------------------------------------------- 00:33:08.929 Suppressions used: 00:33:08.929 count bytes template 00:33:08.929 1 57 /usr/src/fio/parse.c 00:33:08.929 226 21696 /usr/src/fio/iolog.c 00:33:08.929 1 8 libtcmalloc_minimal.so 00:33:08.929 ----------------------------------------------------- 00:33:08.929 00:33:08.929 00:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:09.188 00:13:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:09.188 00:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:33:12.476 Nvme0n1 00:33:12.476 00:13:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=172e49df-b7e9-4785-b588-dce18d5215df 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 172e49df-b7e9-4785-b588-dce18d5215df 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=172e49df-b7e9-4785-b588-dce18d5215df 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:15.766 { 00:33:15.766 "uuid": "172e49df-b7e9-4785-b588-dce18d5215df", 00:33:15.766 "name": "lvs_0", 00:33:15.766 "base_bdev": "Nvme0n1", 00:33:15.766 "total_data_clusters": 930, 00:33:15.766 "free_clusters": 930, 00:33:15.766 "block_size": 512, 00:33:15.766 "cluster_size": 1073741824 00:33:15.766 } 00:33:15.766 ]' 00:33:15.766 00:13:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="172e49df-b7e9-4785-b588-dce18d5215df") .free_clusters' 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="172e49df-b7e9-4785-b588-dce18d5215df") .cluster_size' 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:33:15.766 952320 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:33:15.766 95af0dcf-d51a-4a54-9c88-fc597429819c 00:33:15.766 00:13:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:16.025 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:16.283 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:16.542 00:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:16.801 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:16.801 fio-3.35 00:33:16.801 Starting 1 thread 00:33:19.331 00:33:19.331 test: (groupid=0, jobs=1): err= 0: pid=4178686: Sat Dec 14 00:13:58 2024 00:33:19.331 read: IOPS=6928, BW=27.1MiB/s (28.4MB/s)(54.3MiB/2007msec) 00:33:19.331 slat (nsec): min=1655, max=239027, avg=2224.84, stdev=2977.87 00:33:19.331 clat (usec): min=649, max=170931, avg=10086.99, stdev=11015.69 00:33:19.331 lat (usec): min=652, max=170958, avg=10089.21, stdev=11016.01 00:33:19.331 clat percentiles (msec): 00:33:19.331 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:33:19.331 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:33:19.331 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:33:19.331 | 99.00th=[ 12], 99.50th=[ 16], 99.90th=[ 171], 99.95th=[ 171], 00:33:19.331 | 99.99th=[ 171] 00:33:19.331 bw ( KiB/s): min=19488, max=30560, per=99.81%, avg=27662.00, stdev=5453.06, samples=4 00:33:19.331 iops : min= 4872, max= 7640, avg=6915.50, stdev=1363.27, samples=4 00:33:19.331 write: IOPS=6932, BW=27.1MiB/s (28.4MB/s)(54.3MiB/2007msec); 0 zone resets 00:33:19.331 slat (nsec): min=1719, max=119046, avg=2297.20, stdev=2153.28 00:33:19.331 clat (usec): min=219, max=168814, avg=8242.84, stdev=10267.00 00:33:19.331 lat (usec): min=221, max=168819, avg=8245.14, stdev=10267.26 00:33:19.331 clat percentiles (msec): 00:33:19.331 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:33:19.331 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 8], 00:33:19.331 | 70.00th=[ 8], 80.00th=[ 9], 90.00th=[ 
9], 95.00th=[ 9], 00:33:19.331 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169], 00:33:19.331 | 99.99th=[ 169] 00:33:19.331 bw ( KiB/s): min=20456, max=30336, per=99.97%, avg=27722.00, stdev=4845.97, samples=4 00:33:19.331 iops : min= 5114, max= 7584, avg=6930.50, stdev=1211.49, samples=4 00:33:19.331 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:33:19.331 lat (msec) : 2=0.03%, 4=0.18%, 10=88.95%, 20=10.36%, 250=0.46% 00:33:19.331 cpu : usr=67.15%, sys=24.98%, ctx=887, majf=0, minf=1505 00:33:19.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:19.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:19.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:19.331 issued rwts: total=13906,13913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:19.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:19.331 00:33:19.331 Run status group 0 (all jobs): 00:33:19.332 READ: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=54.3MiB (57.0MB), run=2007-2007msec 00:33:19.332 WRITE: bw=27.1MiB/s (28.4MB/s), 27.1MiB/s-27.1MiB/s (28.4MB/s-28.4MB/s), io=54.3MiB (57.0MB), run=2007-2007msec 00:33:19.332 ----------------------------------------------------- 00:33:19.332 Suppressions used: 00:33:19.332 count bytes template 00:33:19.332 1 58 /usr/src/fio/parse.c 00:33:19.332 1 8 libtcmalloc_minimal.so 00:33:19.332 ----------------------------------------------------- 00:33:19.332 00:33:19.332 00:13:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:19.590 00:13:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 
-- # ls_nested_guid=82339731-d5b3-48ac-b691-e81252ececd2 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 82339731-d5b3-48ac-b691-e81252ececd2 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=82339731-d5b3-48ac-b691-e81252ececd2 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:20.968 { 00:33:20.968 "uuid": "172e49df-b7e9-4785-b588-dce18d5215df", 00:33:20.968 "name": "lvs_0", 00:33:20.968 "base_bdev": "Nvme0n1", 00:33:20.968 "total_data_clusters": 930, 00:33:20.968 "free_clusters": 0, 00:33:20.968 "block_size": 512, 00:33:20.968 "cluster_size": 1073741824 00:33:20.968 }, 00:33:20.968 { 00:33:20.968 "uuid": "82339731-d5b3-48ac-b691-e81252ececd2", 00:33:20.968 "name": "lvs_n_0", 00:33:20.968 "base_bdev": "95af0dcf-d51a-4a54-9c88-fc597429819c", 00:33:20.968 "total_data_clusters": 237847, 00:33:20.968 "free_clusters": 237847, 00:33:20.968 "block_size": 512, 00:33:20.968 "cluster_size": 4194304 00:33:20.968 } 00:33:20.968 ]' 00:33:20.968 00:13:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="82339731-d5b3-48ac-b691-e81252ececd2") .free_clusters' 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="82339731-d5b3-48ac-b691-e81252ececd2") .cluster_size' 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:33:20.968 951388 00:33:20.968 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:33:21.905 166a1604-93ef-4246-8ac2-e151ad56ae3b 00:33:21.905 00:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:22.164 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:22.421 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:22.702 00:14:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:33:22.966 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:22.966 fio-3.35 00:33:22.966 Starting 1 thread 00:33:25.499 00:33:25.499 test: (groupid=0, jobs=1): err= 0: pid=4179863: Sat Dec 14 00:14:04 2024 00:33:25.499 read: IOPS=6735, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec) 00:33:25.499 slat (nsec): min=1656, max=100553, avg=1878.88, stdev=1252.70 00:33:25.499 clat (usec): min=3724, max=16314, avg=10397.68, stdev=937.12 00:33:25.499 lat (usec): min=3728, max=16315, avg=10399.56, stdev=937.03 00:33:25.499 clat percentiles (usec): 00:33:25.499 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:33:25.499 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:33:25.499 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:33:25.499 | 99.00th=[12387], 99.50th=[12649], 99.90th=[15664], 99.95th=[16057], 00:33:25.499 | 99.99th=[16319] 00:33:25.499 bw ( KiB/s): min=25744, max=27512, per=99.90%, avg=26914.00, stdev=817.58, samples=4 00:33:25.499 iops : min= 6436, max= 6878, avg=6728.50, stdev=204.39, samples=4 00:33:25.499 write: IOPS=6739, BW=26.3MiB/s (27.6MB/s)(52.9MiB/2008msec); 0 zone resets 00:33:25.499 slat (nsec): min=1717, max=87393, avg=1945.63, stdev=929.80 00:33:25.499 clat (usec): min=1699, max=15962, avg=8468.54, stdev=781.73 00:33:25.499 lat (usec): min=1704, max=15964, avg=8470.49, stdev=781.67 00:33:25.499 clat percentiles (usec): 00:33:25.499 | 1.00th=[ 6652], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898], 00:33:25.499 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:33:25.499 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:33:25.499 | 99.00th=[10159], 99.50th=[10421], 99.90th=[13829], 99.95th=[15401], 00:33:25.499 | 99.99th=[15926] 00:33:25.499 bw ( KiB/s): min=26768, max=27120, per=99.95%, avg=26944.00, stdev=152.91, samples=4 00:33:25.499 iops : min= 6692, max= 6780, avg=6736.00, 
stdev=38.23, samples=4 00:33:25.499 lat (msec) : 2=0.01%, 4=0.10%, 10=65.44%, 20=34.45% 00:33:25.499 cpu : usr=73.49%, sys=25.46%, ctx=85, majf=0, minf=1504 00:33:25.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:25.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:25.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:25.499 issued rwts: total=13524,13532,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:25.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:25.499 00:33:25.499 Run status group 0 (all jobs): 00:33:25.499 READ: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.4MB), run=2008-2008msec 00:33:25.499 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.9MiB (55.4MB), run=2008-2008msec 00:33:25.499 ----------------------------------------------------- 00:33:25.499 Suppressions used: 00:33:25.499 count bytes template 00:33:25.499 1 58 /usr/src/fio/parse.c 00:33:25.499 1 8 libtcmalloc_minimal.so 00:33:25.499 ----------------------------------------------------- 00:33:25.499 00:33:25.499 00:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:25.758 00:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:25.758 00:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:29.950 00:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:30.208 00:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:33.496 00:14:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:33.496 00:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.400 rmmod nvme_tcp 00:33:35.400 rmmod nvme_fabrics 00:33:35.400 rmmod nvme_keyring 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 4175845 ']' 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 4175845 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 4175845 ']' 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 4175845 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4175845 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4175845' 00:33:35.400 killing process with pid 4175845 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 4175845 00:33:35.400 00:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 4175845 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.777 00:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.683 00:33:38.683 real 0m43.676s 00:33:38.683 user 2m54.270s 00:33:38.683 sys 0m10.108s 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.683 ************************************ 00:33:38.683 END TEST nvmf_fio_host 00:33:38.683 ************************************ 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:38.683 ************************************ 00:33:38.683 START TEST nvmf_failover 00:33:38.683 ************************************ 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:38.683 * Looking for test storage... 
00:33:38.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.683 --rc genhtml_branch_coverage=1 00:33:38.683 --rc genhtml_function_coverage=1 00:33:38.683 --rc genhtml_legend=1 00:33:38.683 --rc geninfo_all_blocks=1 00:33:38.683 --rc geninfo_unexecuted_blocks=1 00:33:38.683 00:33:38.683 ' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:33:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.683 --rc genhtml_branch_coverage=1 00:33:38.683 --rc genhtml_function_coverage=1 00:33:38.683 --rc genhtml_legend=1 00:33:38.683 --rc geninfo_all_blocks=1 00:33:38.683 --rc geninfo_unexecuted_blocks=1 00:33:38.683 00:33:38.683 ' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.683 --rc genhtml_branch_coverage=1 00:33:38.683 --rc genhtml_function_coverage=1 00:33:38.683 --rc genhtml_legend=1 00:33:38.683 --rc geninfo_all_blocks=1 00:33:38.683 --rc geninfo_unexecuted_blocks=1 00:33:38.683 00:33:38.683 ' 00:33:38.683 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.683 --rc genhtml_branch_coverage=1 00:33:38.683 --rc genhtml_function_coverage=1 00:33:38.683 --rc genhtml_legend=1 00:33:38.683 --rc geninfo_all_blocks=1 00:33:38.684 --rc geninfo_unexecuted_blocks=1 00:33:38.684 00:33:38.684 ' 00:33:38.684 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.684 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:38.684 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.684 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.943 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.944 00:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.218 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:44.219 00:14:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:44.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.219 00:14:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:44.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.219 00:14:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:44.219 Found net devices under 0000:af:00.0: cvl_0_0 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:44.219 Found net devices under 0000:af:00.1: cvl_0_1 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:44.219 00:14:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:44.219 00:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:44.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:44.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:33:44.219 00:33:44.219 --- 10.0.0.2 ping statistics --- 00:33:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.219 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:44.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:44.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:33:44.219 00:33:44.219 --- 10.0.0.1 ping statistics --- 00:33:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:44.219 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:44.219 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=4185228 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 4185228 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4185228 ']' 00:33:44.220 00:14:23 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:44.220 00:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:44.479 [2024-12-14 00:14:23.371373] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:44.479 [2024-12-14 00:14:23.371470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:44.479 [2024-12-14 00:14:23.490414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:44.479 [2024-12-14 00:14:23.596962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:44.479 [2024-12-14 00:14:23.597010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:44.479 [2024-12-14 00:14:23.597020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:44.479 [2024-12-14 00:14:23.597045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:44.479 [2024-12-14 00:14:23.597054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:44.479 [2024-12-14 00:14:23.599277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:44.479 [2024-12-14 00:14:23.599339] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.479 [2024-12-14 00:14:23.599360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.145 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:45.486 [2024-12-14 00:14:24.399000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.486 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:45.744 Malloc0 00:33:45.744 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:46.003 00:14:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:46.003 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:46.260 [2024-12-14 00:14:25.283933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.260 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:46.518 [2024-12-14 00:14:25.476524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:46.518 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:46.776 [2024-12-14 00:14:25.669141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=4185633 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 4185633 /var/tmp/bdevperf.sock 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 4185633 ']' 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:46.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.776 00:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:47.712 00:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.712 00:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:47.712 00:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:47.971 NVMe0n1 00:33:47.971 00:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:48.230 00:33:48.230 00:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:48.230 00:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=4185865 00:33:48.230 00:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:33:49.167 00:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.425 00:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:52.711 00:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:52.711 00:33:52.970 00:14:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:52.970 [2024-12-14 00:14:32.069192] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069393] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 
[2024-12-14 00:14:32.069498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 [2024-12-14 00:14:32.069506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(6) to be set 00:33:52.970 00:14:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:56.255 00:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:56.255 [2024-12-14 00:14:35.277654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:56.255 00:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:57.187 00:14:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:57.446 00:14:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 4185865 00:34:04.013 { 00:34:04.013 "results": [ 00:34:04.013 { 00:34:04.013 "job": "NVMe0n1", 00:34:04.013 "core_mask": "0x1", 00:34:04.013 "workload": "verify", 00:34:04.013 "status": "finished", 00:34:04.013 "verify_range": { 00:34:04.013 "start": 0, 00:34:04.013 "length": 16384 00:34:04.013 }, 00:34:04.013 "queue_depth": 128, 00:34:04.013 "io_size": 4096, 00:34:04.013 "runtime": 15.005361, 00:34:04.013 "iops": 9548.387406340973, 00:34:04.014 "mibps": 37.298388306019426, 00:34:04.014 "io_failed": 15045, 00:34:04.014 "io_timeout": 0, 00:34:04.014 "avg_latency_us": 12107.043986859811, 00:34:04.014 "min_latency_us": 477.8666666666667, 00:34:04.014 "max_latency_us": 35701.51619047619 00:34:04.014 } 00:34:04.014 ], 00:34:04.014 "core_count": 1 00:34:04.014 } 00:34:04.014 00:14:42 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 4185633 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4185633 ']' 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4185633 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4185633 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4185633' 00:34:04.014 killing process with pid 4185633 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4185633 00:34:04.014 00:14:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4185633 00:34:04.283 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:04.283 [2024-12-14 00:14:25.768760] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:04.283 [2024-12-14 00:14:25.768853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4185633 ] 00:34:04.283 [2024-12-14 00:14:25.883310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.283 [2024-12-14 00:14:26.001992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.283 Running I/O for 15 seconds... 00:34:04.283 9645.00 IOPS, 37.68 MiB/s [2024-12-13T23:14:43.424Z] [2024-12-14 00:14:28.425588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425859] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.425980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.425991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:110 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:04.283 [2024-12-14 00:14:28.426115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.283 [2024-12-14 00:14:28.426278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.283 [2024-12-14 00:14:28.426287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 
[2024-12-14 00:14:28.426467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.284 [2024-12-14 00:14:28.426536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 
00:14:28.426810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426920] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.426982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.426992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.427001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.427013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.427022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.427034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.427043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.284 [2024-12-14 00:14:28.427053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.284 [2024-12-14 00:14:28.427063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 
[2024-12-14 00:14:28.427392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427514] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285 [2024-12-14 00:14:28.427850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.285 [2024-12-14 00:14:28.427859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.285
[... repeated near-identical log records omitted: nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs reporting WRITE and READ commands on sqid:1 (LBAs 84048-85064, len:8) all completed with "ABORTED - SQ DELETION (00/08)"; four ASYNC EVENT REQUEST commands on qid:0 (cid:0-3) aborted with the same status; interleaved nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs *ERROR*: "aborting queued i/o" and 558:nvme_qpair_manual_complete_request *NOTICE*: "Command completed manually:" messages as queued I/O was drained during qpair teardown; one nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR*: "The recv state of tqpair=0x615000325580 is same with the state(6) to be set" ...]
00:34:04.289 [2024-12-14 00:14:28.441570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84504 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441587]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84512 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84520 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84528 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 
00:14:28.441697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84536 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84544 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84552 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84560 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84568 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84576 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84584 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441919] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84592 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84600 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.441967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.441977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.441984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.441993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84608 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84616 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 
[2024-12-14 00:14:28.442035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84624 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84632 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84640 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84648 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84656 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84664 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84672 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.289 [2024-12-14 00:14:28.442283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.289 [2024-12-14 00:14:28.442290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.289 [2024-12-14 00:14:28.442298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84680 len:8 PRP1 0x0 PRP2 0x0 00:34:04.289 [2024-12-14 00:14:28.442307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.442315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.442322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.442329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84688 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.442339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.442347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.442354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.442361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84696 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.442369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.442378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.442385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.442393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84704 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.442401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.442411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.442418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.442426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84712 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84720 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84728 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84736 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84744 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84752 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84760 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84768 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.449961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.449971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84776 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.449982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.449994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 
[2024-12-14 00:14:28.450003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84784 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84792 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84800 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:84808 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84816 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84824 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84832 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450295] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84840 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84848 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84856 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 
00:14:28.450451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84864 len:8 PRP1 0x0 PRP2 0x0 00:34:04.290 [2024-12-14 00:14:28.450463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.290 [2024-12-14 00:14:28.450474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.290 [2024-12-14 00:14:28.450484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.290 [2024-12-14 00:14:28.450494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84872 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84880 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84888 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84896 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84904 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84912 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450778] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84920 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84928 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84936 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84944 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 
[2024-12-14 00:14:28.450928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.450959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84952 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.450971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.450982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.450991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84960 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84968 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84976 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84984 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84992 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85000 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85008 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85016 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85024 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85032 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85040 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85048 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85056 len:8 PRP1 0x0 PRP2 0x0 00:34:04.291 [2024-12-14 00:14:28.451538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.291 [2024-12-14 00:14:28.451549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.291 [2024-12-14 00:14:28.451558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.291 [2024-12-14 00:14:28.451569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 00:34:04.292 [2024-12-14 00:14:28.451581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:28.451592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.292 [2024-12-14 00:14:28.451602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.292 [2024-12-14 00:14:28.451612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84384 len:8 PRP1 0x0 PRP2 0x0 00:34:04.292 [2024-12-14 00:14:28.451623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:28.451635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.292 [2024-12-14 00:14:28.451645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.292 [2024-12-14 00:14:28.451655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84392 len:8 PRP1 0x0 PRP2 0x0 00:34:04.292 [2024-12-14 00:14:28.451666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:28.451678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.292 [2024-12-14 00:14:28.451687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.292 [2024-12-14 00:14:28.451697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84400 len:8 PRP1 0x0 PRP2 0x0 00:34:04.292 [2024-12-14 00:14:28.451709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:28.452101] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:04.292 [2024-12-14 00:14:28.452119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:34:04.292 [2024-12-14 00:14:28.452185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:04.292 [2024-12-14 00:14:28.456890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:04.292 [2024-12-14 00:14:28.610568] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:34:04.292 8811.50 IOPS, 34.42 MiB/s [2024-12-13T23:14:43.433Z] 9133.33 IOPS, 35.68 MiB/s [2024-12-13T23:14:43.433Z] 9327.00 IOPS, 36.43 MiB/s [2024-12-13T23:14:43.433Z] [2024-12-14 00:14:32.070499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:04.292 [2024-12-14 00:14:32.070908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.070987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.070998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.071007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.071018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.071027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.071038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.071048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.071059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.071068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.292 [2024-12-14 00:14:32.071079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.292 [2024-12-14 00:14:32.071088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 
[2024-12-14 00:14:32.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.293 [2024-12-14 00:14:32.071572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071591] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 
nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 
[2024-12-14 00:14:32.071823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.293 [2024-12-14 00:14:32.071894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.293 [2024-12-14 00:14:32.071905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.071914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.071925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.071935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.071946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.071955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.071966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.071975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.071991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 
00:14:32.072176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072285] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.294 [2024-12-14 00:14:32.072555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.294 [2024-12-14 00:14:32.072607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:8 PRP1 0x0 PRP2 0x0 00:34:04.294 [2024-12-14 00:14:32.072618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.294 [2024-12-14 00:14:32.072644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.294 [2024-12-14 00:14:32.072653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15944 len:8 PRP1 0x0 PRP2 0x0 00:34:04.294 [2024-12-14 00:14:32.072663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.294 [2024-12-14 00:14:32.072679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.294 [2024-12-14 00:14:32.072687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15952 len:8 PRP1 0x0 PRP2 0x0 00:34:04.294 [2024-12-14 00:14:32.072696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.294 [2024-12-14 00:14:32.072712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.294 [2024-12-14 00:14:32.072720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15960 len:8 PRP1 0x0 PRP2 0x0 00:34:04.294 [2024-12-14 00:14:32.072729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.294 [2024-12-14 00:14:32.072737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.294 [2024-12-14 00:14:32.072745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.294 [2024-12-14 00:14:32.072753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15976 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15984 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15992 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16008 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16016 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.072973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.072980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16024 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.072989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.072997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16040 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16048 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16056 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 
[2024-12-14 00:14:32.073133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16072 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16080 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16088 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16104 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16112 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073359] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16120 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16136 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 
00:14:32.073481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16144 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16152 len:8 PRP1 0x0 PRP2 0x0 00:34:04.295 [2024-12-14 00:14:32.073522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.295 [2024-12-14 00:14:32.073541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.295 [2024-12-14 00:14:32.073548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.295 [2024-12-14 00:14:32.073556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:8 PRP1 0x0 PRP2 0x0 00:34:04.296 [2024-12-14 00:14:32.084315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.296 [2024-12-14 00:14:32.084347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.296 [2024-12-14 00:14:32.084357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16168 len:8 PRP1 0x0 PRP2 0x0 00:34:04.296 [2024-12-14 00:14:32.084371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084741] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:04.296 [2024-12-14 00:14:32.084787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:32.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:32.084839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:32.084870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:32.084896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:32.084908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:34:04.296 [2024-12-14 00:14:32.084964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:04.296 [2024-12-14 00:14:32.089077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:34:04.296 [2024-12-14 00:14:32.127205] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:34:04.296 9294.20 IOPS, 36.31 MiB/s [2024-12-13T23:14:43.437Z] 9395.67 IOPS, 36.70 MiB/s [2024-12-13T23:14:43.437Z] 9469.86 IOPS, 36.99 MiB/s [2024-12-13T23:14:43.437Z] 9519.88 IOPS, 37.19 MiB/s [2024-12-13T23:14:43.437Z] 9553.00 IOPS, 37.32 MiB/s [2024-12-13T23:14:43.437Z] [2024-12-14 00:14:36.503838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:36.503897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.503911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:36.503921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.503933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 00:14:36.503943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.503953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:04.296 [2024-12-14 
00:14:36.503962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.503971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325580 is same with the state(6) to be set 00:34:04.296 [2024-12-14 00:14:36.507091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.296 [2024-12-14 00:14:36.507122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507222] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.296 [2024-12-14 00:14:36.507520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.296 [2024-12-14 00:14:36.507530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.297 [2024-12-14 00:14:36.507776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.507980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.507991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 
[2024-12-14 00:14:36.508066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.297 [2024-12-14 00:14:36.508361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.297 [2024-12-14 00:14:36.508370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.298 [2024-12-14 00:14:36.508390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.298 
[2024-12-14 00:14:36.508410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:04.298 [2024-12-14 00:14:36.508431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112344 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112352 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112360 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 
00:14:36.508560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112368 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112376 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112384 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112392 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112400 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112408 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:04.298 [2024-12-14 00:14:36.508782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112416 len:8 PRP1 0x0 PRP2 0x0 00:34:04.298 [2024-12-14 00:14:36.508791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:04.298 [2024-12-14 00:14:36.508800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:04.298 [2024-12-14 00:14:36.508808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
[... same abort/manual-complete cycle repeated for queued WRITE commands lba:112424 through lba:112800 (len:8) and queued READ commands lba:111800 through lba:111848 (len:8), all ABORTED - SQ DELETION (00/08) ...] 
00:34:04.300 [2024-12-14 00:14:36.521860] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:04.300 [2024-12-14 00:14:36.521877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:34:04.300 [2024-12-14 00:14:36.521931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:04.300 [2024-12-14 00:14:36.526029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:04.301 [2024-12-14 00:14:36.678152] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:34:04.301 9414.40 IOPS, 36.77 MiB/s [2024-12-13T23:14:43.442Z] 9447.18 IOPS, 36.90 MiB/s [2024-12-13T23:14:43.442Z] 9493.17 IOPS, 37.08 MiB/s [2024-12-13T23:14:43.442Z] 9516.00 IOPS, 37.17 MiB/s [2024-12-13T23:14:43.442Z] 9527.93 IOPS, 37.22 MiB/s [2024-12-13T23:14:43.442Z] 9543.27 IOPS, 37.28 MiB/s 00:34:04.301 Latency(us) 00:34:04.301 [2024-12-13T23:14:43.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.301 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:04.301 Verification LBA range: start 0x0 length 0x4000 00:34:04.301 NVMe0n1 : 15.01 9548.39 37.30 1002.64 0.00 12107.04 477.87 35701.52 00:34:04.301 [2024-12-13T23:14:43.442Z] =================================================================================================================== 00:34:04.301 [2024-12-13T23:14:43.442Z] Total : 9548.39 37.30 1002.64 0.00 12107.04 477.87 35701.52 00:34:04.301 Received shutdown signal, test time was about 15.000000 seconds 00:34:04.301 00:34:04.301 Latency(us) 00:34:04.301 [2024-12-13T23:14:43.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.301 [2024-12-13T23:14:43.442Z] =================================================================================================================== 00:34:04.301 [2024-12-13T23:14:43.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=4188454 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 4188454 /var/tmp/bdevperf.sock 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 4188454 ']' 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.301 00:14:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:05.237 00:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.237 00:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:05.238 00:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:05.496 [2024-12-14 00:14:44.429738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:05.496 00:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:05.755 [2024-12-14 00:14:44.642387] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:05.755 00:14:44 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:06.013 NVMe0n1 00:34:06.013 00:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:06.579 00:34:06.579 00:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:06.837 00:34:06.837 00:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:06.837 00:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:07.096 00:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:07.354 00:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:10.640 00:14:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:10.640 00:14:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:10.640 00:14:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=4189434 00:34:10.640 00:14:49 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:10.640 00:14:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 4189434 00:34:11.577 { 00:34:11.577 "results": [ 00:34:11.577 { 00:34:11.577 "job": "NVMe0n1", 00:34:11.577 "core_mask": "0x1", 00:34:11.577 "workload": "verify", 00:34:11.577 "status": "finished", 00:34:11.577 "verify_range": { 00:34:11.577 "start": 0, 00:34:11.577 "length": 16384 00:34:11.577 }, 00:34:11.577 "queue_depth": 128, 00:34:11.577 "io_size": 4096, 00:34:11.577 "runtime": 1.009932, 00:34:11.577 "iops": 9710.55477002412, 00:34:11.577 "mibps": 37.93185457040672, 00:34:11.577 "io_failed": 0, 00:34:11.577 "io_timeout": 0, 00:34:11.577 "avg_latency_us": 13111.795860488379, 00:34:11.577 "min_latency_us": 2012.8914285714286, 00:34:11.577 "max_latency_us": 11546.819047619048 00:34:11.577 } 00:34:11.577 ], 00:34:11.577 "core_count": 1 00:34:11.577 } 00:34:11.577 00:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:11.577 [2024-12-14 00:14:43.453252] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:11.577 [2024-12-14 00:14:43.453346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188454 ] 00:34:11.577 [2024-12-14 00:14:43.570997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.577 [2024-12-14 00:14:43.685314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.577 [2024-12-14 00:14:46.246653] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:11.577 [2024-12-14 00:14:46.246722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.577 [2024-12-14 00:14:46.246740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.577 [2024-12-14 00:14:46.246755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.577 [2024-12-14 00:14:46.246766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.577 [2024-12-14 00:14:46.246776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.577 [2024-12-14 00:14:46.246786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.577 [2024-12-14 00:14:46.246797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.577 [2024-12-14 00:14:46.246806] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.577 [2024-12-14 00:14:46.246815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:34:11.577 [2024-12-14 00:14:46.246864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:11.577 [2024-12-14 00:14:46.246892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325580 (9): Bad file descriptor 00:34:11.577 [2024-12-14 00:14:46.257601] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:11.577 Running I/O for 1 seconds... 00:34:11.577 9671.00 IOPS, 37.78 MiB/s 00:34:11.577 Latency(us) 00:34:11.577 [2024-12-13T23:14:50.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.577 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:11.577 Verification LBA range: start 0x0 length 0x4000 00:34:11.577 NVMe0n1 : 1.01 9710.55 37.93 0.00 0.00 13111.80 2012.89 11546.82 00:34:11.577 [2024-12-13T23:14:50.718Z] =================================================================================================================== 00:34:11.577 [2024-12-13T23:14:50.718Z] Total : 9710.55 37.93 0.00 0.00 13111.80 2012.89 11546.82 00:34:11.577 00:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:11.577 00:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:11.835 00:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.093 00:14:51 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:12.093 00:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:12.093 00:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:12.352 00:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 4188454 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4188454 ']' 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4188454 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4188454 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4188454' 00:34:15.639 killing 
process with pid 4188454 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4188454 00:34:15.639 00:14:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4188454 00:34:16.574 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:16.574 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.832 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.833 rmmod nvme_tcp 00:34:16.833 rmmod nvme_fabrics 00:34:16.833 rmmod nvme_keyring 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 4185228 ']' 00:34:16.833 00:14:55 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 4185228 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 4185228 ']' 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 4185228 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4185228 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4185228' 00:34:16.833 killing process with pid 4185228 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 4185228 00:34:16.833 00:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 4185228 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:18.209 00:14:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.209 00:14:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:20.744 00:34:20.744 real 0m41.658s 00:34:20.744 user 2m14.738s 00:34:20.744 sys 0m7.734s 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:20.744 ************************************ 00:34:20.744 END TEST nvmf_failover 00:34:20.744 ************************************ 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.744 ************************************ 00:34:20.744 START TEST nvmf_host_discovery 00:34:20.744 ************************************ 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:20.744 * Looking for test storage... 
00:34:20.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:20.744 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.745 --rc genhtml_branch_coverage=1 00:34:20.745 --rc genhtml_function_coverage=1 00:34:20.745 --rc 
genhtml_legend=1 00:34:20.745 --rc geninfo_all_blocks=1 00:34:20.745 --rc geninfo_unexecuted_blocks=1 00:34:20.745 00:34:20.745 ' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.745 --rc genhtml_branch_coverage=1 00:34:20.745 --rc genhtml_function_coverage=1 00:34:20.745 --rc genhtml_legend=1 00:34:20.745 --rc geninfo_all_blocks=1 00:34:20.745 --rc geninfo_unexecuted_blocks=1 00:34:20.745 00:34:20.745 ' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.745 --rc genhtml_branch_coverage=1 00:34:20.745 --rc genhtml_function_coverage=1 00:34:20.745 --rc genhtml_legend=1 00:34:20.745 --rc geninfo_all_blocks=1 00:34:20.745 --rc geninfo_unexecuted_blocks=1 00:34:20.745 00:34:20.745 ' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.745 --rc genhtml_branch_coverage=1 00:34:20.745 --rc genhtml_function_coverage=1 00:34:20.745 --rc genhtml_legend=1 00:34:20.745 --rc geninfo_all_blocks=1 00:34:20.745 --rc geninfo_unexecuted_blocks=1 00:34:20.745 00:34:20.745 ' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.745 00:14:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.745 00:14:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.745 00:14:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:20.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:20.745 00:14:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.014 
00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.014 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.015 00:15:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:26.015 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:26.015 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:26.015 Found net devices under 0000:af:00.0: cvl_0_0 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:26.015 Found net devices under 0000:af:00.1: cvl_0_1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:34:26.015 00:34:26.015 --- 10.0.0.2 ping statistics --- 00:34:26.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.015 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:34:26.015 00:34:26.015 --- 10.0.0.1 ping statistics --- 00:34:26.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.015 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.015 
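The `nvmf_tcp_init` sequence traced above (nvmf/common.sh@250 through @291) builds the test topology by moving the target-side port into a private network namespace, so the two physical ports on one host can exercise a real TCP path. A dry-run sketch of the equivalent commands follows; the interface names, namespace name, and 10.0.0.0/24 addresses are taken from the log, while the `RUN` dry-run wrapper is an addition here so the plan can be previewed without root (set `RUN=` to execute for real):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology the autotest log sets up.
# RUN="echo" prints each command instead of executing it (no root needed).
RUN="echo"
TARGET_IF=cvl_0_0        # target-side port, will own 10.0.0.2 inside the netns
INITIATOR_IF=cvl_0_1     # initiator-side port, keeps 10.0.0.1 in the root netns
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"                 # move target port into the namespace
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator side, then verify both directions
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                                   # initiator -> target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```

Once this succeeds, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array), while host-side commands run in the root namespace.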
00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=4194144 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 4194144 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 4194144 ']' 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.015 00:15:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.015 [2024-12-14 00:15:05.056284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:26.015 [2024-12-14 00:15:05.056384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.274 [2024-12-14 00:15:05.174860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.274 [2024-12-14 00:15:05.274338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.274 [2024-12-14 00:15:05.274384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.274 [2024-12-14 00:15:05.274395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.274 [2024-12-14 00:15:05.274422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.274 [2024-12-14 00:15:05.274430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:26.274 [2024-12-14 00:15:05.275991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 [2024-12-14 00:15:05.905785] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 [2024-12-14 00:15:05.917981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:26.843 00:15:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 null0 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 null1 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=4194262 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 4194262 /tmp/host.sock 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 4194262 ']' 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:26.843 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.843 00:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:27.102 [2024-12-14 00:15:06.014453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:27.102 [2024-12-14 00:15:06.014551] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4194262 ] 00:34:27.102 [2024-12-14 00:15:06.125136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.102 [2024-12-14 00:15:06.237827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:28.039 
00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:28.039 00:15:06 
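Stripped of the xtrace noise, the RPC sequence the discovery test has issued so far (target-side calls default to `/var/tmp/spdk.sock`; host-side calls use `-s /tmp/host.sock`) reduces to the dry-run sketch below. The command names and arguments are copied from the traced `rpc_cmd` lines; the `RPC="echo rpc.py"` wrapper and the assumption that `scripts/rpc.py` is on `PATH` are additions here:

```shell
#!/usr/bin/env bash
# Dry-run recap of the rpc_cmd calls traced in the log above.
RPC="echo rpc.py"   # set RPC="rpc.py" against a live target to execute

# target side: TCP transport, discovery listener on 8009, two null bdevs
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine

# host side (second nvmf_tgt on /tmp/host.sock): start discovery against 8009
$RPC -s /tmp/host.sock log_set_flag bdev_nvme
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
```

The repetitive `get_subsystem_names` / `get_bdev_list` blocks that follow are polling helpers: they call `bdev_nvme_get_controllers` and `bdev_get_bdevs` over `/tmp/host.sock` until the discovered controller (`nvme0`) and its namespaces appear.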
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.039 00:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:28.039 
00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 [2024-12-14 00:15:07.161406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.039 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.298 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:28.299 00:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:28.866 [2024-12-14 00:15:07.866760] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:28.866 [2024-12-14 00:15:07.866794] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:28.866 [2024-12-14 00:15:07.866820] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:28.866 [2024-12-14 00:15:07.954079] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:29.125 [2024-12-14 00:15:08.055949] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:29.125 [2024-12-14 00:15:08.057114] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x615000325f80:1 started. 00:34:29.125 [2024-12-14 00:15:08.058822] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:29.125 [2024-12-14 00:15:08.058846] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:29.125 [2024-12-14 00:15:08.065978] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000325f80 was disconnected and freed. delete nvme_qpair. 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.384 00:15:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.384 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.385 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 [2024-12-14 00:15:08.549394] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 
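The `waitforcondition` calls threaded through the trace above follow a simple retry pattern: re-`eval` a condition string up to `max` times with a one-second sleep between attempts, succeeding as soon as it holds. The sketch below is an approximation reconstructed from the `autotest_common.sh@918`–`@924` xtrace lines, not the verbatim SPDK source; the demo condition is invented for illustration.

```shell
#!/usr/bin/env bash
# Reconstructed sketch of SPDK's waitforcondition helper (inferred from the
# xtrace above; the real one lives in test/common/autotest_common.sh).
waitforcondition() {
	local cond=$1
	local max=10
	while ((max--)); do
		# eval so command substitutions inside the condition string,
		# e.g. "$(get_bdev_list)", are re-expanded on every attempt
		if eval "$cond"; then
			return 0
		fi
		sleep 1
	done
	return 1
}

# Hypothetical usage: a condition that becomes true on the third evaluation
i=0
waitforcondition '(( ++i == 3 ))' && echo "met after $i checks"
# → met after 3 checks
```

The condition is passed as a single quoted string so that expansion happens at check time inside `eval`, which is why the trace shows the word-split form `eval '[[' '"$(get_bdev_list)"' == ...`.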
00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.644 [2024-12-14 00:15:08.557298] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 [2024-12-14 00:15:08.638529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:29.644 [2024-12-14 00:15:08.639095] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:29.644 [2024-12-14 00:15:08.639126] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.644 00:15:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:29.644 00:15:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:29.644 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:29.645 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:29.645 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.645 [2024-12-14 00:15:08.766540] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:29.645 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:29.645 00:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:29.903 [2024-12-14 00:15:08.873552] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:29.903 [2024-12-14 00:15:08.873611] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:29.903 [2024-12-14 00:15:08.873626] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
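Each `get_subsystem_names`/`get_bdev_list`/`get_subsystem_paths` check in the trace pipes RPC JSON through `jq -r '.[].name'` (or `.[].ctrlrs[].trid.trsvcid`), then `sort` and `xargs`, so a multi-line name list collapses into one sorted, space-separated line that can be string-compared against a literal like `"nvme0n1 nvme0n2"` or `"4420 4421"`. A minimal demo of that normalization step, using a stand-in name list instead of a live `rpc_cmd`/`jq` call:

```shell
#!/usr/bin/env bash
# The sort|xargs tail of the pipeline: order-insensitive, single-line output.
# (In the real trace the input comes from: rpc_cmd ... | jq -r '.[].name')
names=$(printf '%s\n' nvme0n2 nvme0n1 | sort | xargs)
echo "$names"
# → nvme0n1 nvme0n2

[[ $names == "nvme0n1 nvme0n2" ]] && echo "bdev list matches"
```

`xargs` with no command defaults to `echo`, which is what joins the sorted lines with single spaces; `sort -n` is used for the numeric port lists.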
00:34:29.903 [2024-12-14 00:15:08.873635] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:30.841 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.842 [2024-12-14 00:15:09.882307] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:30.842 [2024-12-14 00:15:09.882338] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:30.842 [2024-12-14 00:15:09.891177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.842 [2024-12-14 00:15:09.891210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.842 [2024-12-14 00:15:09.891224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.842 [2024-12-14 00:15:09.891234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.842 [2024-12-14 00:15:09.891244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.842 [2024-12-14 00:15:09.891253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.842 [2024-12-14 00:15:09.891263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.842 [2024-12-14 00:15:09.891273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.842 [2024-12-14 00:15:09.891282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.842 00:15:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:30.842 [2024-12-14 00:15:09.901183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.842 [2024-12-14 00:15:09.911215] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.842 [2024-12-14 00:15:09.911239] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:30.842 [2024-12-14 00:15:09.911246] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:30.842 [2024-12-14 00:15:09.911257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.842 [2024-12-14 00:15:09.911287] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:30.842 [2024-12-14 00:15:09.911552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.842 [2024-12-14 00:15:09.911574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.842 [2024-12-14 00:15:09.911586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.842 [2024-12-14 00:15:09.911602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.842 [2024-12-14 00:15:09.911626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.842 [2024-12-14 00:15:09.911639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.842 [2024-12-14 00:15:09.911655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:30.842 [2024-12-14 00:15:09.911665] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.842 [2024-12-14 00:15:09.911673] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:30.842 [2024-12-14 00:15:09.911680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:30.842 [2024-12-14 00:15:09.921322] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.842 [2024-12-14 00:15:09.921344] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:30.842 [2024-12-14 00:15:09.921351] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:30.842 [2024-12-14 00:15:09.921357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.842 [2024-12-14 00:15:09.921378] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:30.842 [2024-12-14 00:15:09.921633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.842 [2024-12-14 00:15:09.921664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.842 [2024-12-14 00:15:09.921675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.842 [2024-12-14 00:15:09.921690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.842 [2024-12-14 00:15:09.921712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.842 [2024-12-14 00:15:09.921722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.842 [2024-12-14 00:15:09.921731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:30.842 [2024-12-14 00:15:09.921739] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.842 [2024-12-14 00:15:09.921746] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:30.842 [2024-12-14 00:15:09.921752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:30.842 [2024-12-14 00:15:09.931414] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.842 [2024-12-14 00:15:09.931444] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:30.842 [2024-12-14 00:15:09.931451] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:30.842 [2024-12-14 00:15:09.931457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.842 [2024-12-14 00:15:09.931479] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:30.842 [2024-12-14 00:15:09.931723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.842 [2024-12-14 00:15:09.931741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.842 [2024-12-14 00:15:09.931752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.842 [2024-12-14 00:15:09.931766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.842 [2024-12-14 00:15:09.931789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.842 [2024-12-14 00:15:09.931801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.842 [2024-12-14 00:15:09.931811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:34:30.842 [2024-12-14 00:15:09.931818] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.842 [2024-12-14 00:15:09.931825] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:30.842 [2024-12-14 00:15:09.931831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:30.842 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:30.843 [2024-12-14 00:15:09.941515] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.843 [2024-12-14 00:15:09.941537] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:30.843 [2024-12-14 00:15:09.941544] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:34:30.843 [2024-12-14 00:15:09.941550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.843 [2024-12-14 00:15:09.941575] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:30.843 [2024-12-14 00:15:09.941808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.843 [2024-12-14 00:15:09.941826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.843 [2024-12-14 00:15:09.941836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.843 [2024-12-14 00:15:09.941851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.843 [2024-12-14 00:15:09.941873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.843 [2024-12-14 00:15:09.941883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.843 [2024-12-14 00:15:09.941892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:30.843 [2024-12-14 00:15:09.941900] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.843 [2024-12-14 00:15:09.941907] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:30.843 [2024-12-14 00:15:09.941913] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:30.843 [2024-12-14 00:15:09.951612] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.843 [2024-12-14 00:15:09.951638] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:30.843 [2024-12-14 00:15:09.951645] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:30.843 [2024-12-14 00:15:09.951662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.843 [2024-12-14 00:15:09.951684] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:30.843 [2024-12-14 00:15:09.951941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.843 [2024-12-14 00:15:09.951958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.843 [2024-12-14 00:15:09.951969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.843 [2024-12-14 00:15:09.951984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.843 [2024-12-14 00:15:09.952008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.843 [2024-12-14 00:15:09.952018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.843 [2024-12-14 00:15:09.952027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:30.843 [2024-12-14 00:15:09.952042] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.843 [2024-12-14 00:15:09.952049] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:30.843 [2024-12-14 00:15:09.952055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:30.843 [2024-12-14 00:15:09.961718] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:30.843 [2024-12-14 00:15:09.961740] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:30.843 [2024-12-14 00:15:09.961746] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:30.843 [2024-12-14 00:15:09.961753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:30.843 [2024-12-14 00:15:09.961778] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:30.843 [2024-12-14 00:15:09.961904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.843 [2024-12-14 00:15:09.961921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:34:30.843 [2024-12-14 00:15:09.961931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:34:30.843 [2024-12-14 00:15:09.961946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:34:30.843 [2024-12-14 00:15:09.961959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:30.843 [2024-12-14 00:15:09.961969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:30.843 [2024-12-14 00:15:09.961978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:30.843 [2024-12-14 00:15:09.961989] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:30.843 [2024-12-14 00:15:09.961996] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:30.843 [2024-12-14 00:15:09.962003] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:30.843 [2024-12-14 00:15:09.968072] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:30.843 [2024-12-14 00:15:09.968099] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:30.843 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq 
-r '.[].ctrlrs[].trid.trsvcid' 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 00:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s 
/tmp/host.sock notify_get_notifications -i 2 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.103 00:15:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
get_bdev_list 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:31.103 00:15:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:31.103 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.104 00:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.481 [2024-12-14 00:15:11.276126] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:32.481 [2024-12-14 00:15:11.276152] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:32.481 [2024-12-14 00:15:11.276183] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page 
command 00:34:32.481 [2024-12-14 00:15:11.404604] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:32.481 [2024-12-14 00:15:11.508515] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:32.481 [2024-12-14 00:15:11.509597] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x615000327380:1 started. 00:34:32.481 [2024-12-14 00:15:11.511609] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:32.481 [2024-12-14 00:15:11.511644] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:32.481 [2024-12-14 00:15:11.515197] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x615000327380 was disconnected and freed. delete nvme_qpair. 
00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.481 request: 00:34:32.481 { 00:34:32.481 "name": "nvme", 00:34:32.481 "trtype": "tcp", 00:34:32.481 "traddr": "10.0.0.2", 00:34:32.481 "adrfam": "ipv4", 00:34:32.481 "trsvcid": "8009", 00:34:32.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:32.481 "wait_for_attach": true, 00:34:32.481 "method": "bdev_nvme_start_discovery", 00:34:32.481 "req_id": 1 00:34:32.481 } 00:34:32.481 Got JSON-RPC error response 00:34:32.481 response: 00:34:32.481 { 00:34:32.481 "code": -17, 00:34:32.481 "message": "File exists" 00:34:32.481 } 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.481 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.740 request: 00:34:32.740 { 00:34:32.740 "name": "nvme_second", 00:34:32.740 "trtype": "tcp", 00:34:32.740 "traddr": "10.0.0.2", 00:34:32.740 "adrfam": "ipv4", 00:34:32.740 "trsvcid": "8009", 00:34:32.740 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:32.740 "wait_for_attach": true, 00:34:32.740 "method": "bdev_nvme_start_discovery", 00:34:32.740 "req_id": 1 00:34:32.740 } 00:34:32.740 Got JSON-RPC error response 00:34:32.740 response: 00:34:32.740 { 00:34:32.740 "code": -17, 00:34:32.740 "message": "File exists" 00:34:32.740 } 
00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 
00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:32.740 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:32.741 00:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:33.677 [2024-12-14 00:15:12.743306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.677 [2024-12-14 00:15:12.743343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327600 with addr=10.0.0.2, port=8010 00:34:33.677 [2024-12-14 00:15:12.743392] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:33.677 [2024-12-14 00:15:12.743402] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:33.677 [2024-12-14 00:15:12.743415] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:34.614 [2024-12-14 00:15:13.745758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.614 [2024-12-14 00:15:13.745789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000327880 with addr=10.0.0.2, port=8010 00:34:34.614 [2024-12-14 00:15:13.745837] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:34.614 [2024-12-14 00:15:13.745846] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:34.614 [2024-12-14 00:15:13.745855] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:35.992 [2024-12-14 00:15:14.747808] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:35.992 request: 00:34:35.992 { 00:34:35.992 "name": "nvme_second", 00:34:35.992 "trtype": "tcp", 00:34:35.992 "traddr": "10.0.0.2", 00:34:35.992 "adrfam": "ipv4", 00:34:35.992 "trsvcid": "8010", 00:34:35.992 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:35.992 "wait_for_attach": false, 00:34:35.992 "attach_timeout_ms": 3000, 00:34:35.992 "method": "bdev_nvme_start_discovery", 00:34:35.992 "req_id": 
1 00:34:35.992 } 00:34:35.992 Got JSON-RPC error response 00:34:35.992 response: 00:34:35.992 { 00:34:35.992 "code": -110, 00:34:35.992 "message": "Connection timed out" 00:34:35.992 } 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 4194262 00:34:35.992 00:15:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.992 rmmod nvme_tcp 00:34:35.992 rmmod nvme_fabrics 00:34:35.992 rmmod nvme_keyring 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 4194144 ']' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 4194144 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 4194144 ']' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 4194144 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4194144 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4194144' 00:34:35.992 killing process with pid 4194144 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 4194144 00:34:35.992 00:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 4194144 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.927 00:15:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:34:39.547 00:34:39.547 real 0m18.760s 00:34:39.547 user 0m23.850s 00:34:39.547 sys 0m5.517s 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:39.547 ************************************ 00:34:39.547 END TEST nvmf_host_discovery 00:34:39.547 ************************************ 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.547 ************************************ 00:34:39.547 START TEST nvmf_host_multipath_status 00:34:39.547 ************************************ 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:39.547 * Looking for test storage... 
00:34:39.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:39.547 00:15:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:39.547 00:15:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.547 --rc genhtml_branch_coverage=1 00:34:39.547 --rc genhtml_function_coverage=1 00:34:39.547 --rc genhtml_legend=1 00:34:39.547 --rc geninfo_all_blocks=1 00:34:39.547 --rc geninfo_unexecuted_blocks=1 00:34:39.547 00:34:39.547 ' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.547 --rc genhtml_branch_coverage=1 00:34:39.547 --rc genhtml_function_coverage=1 00:34:39.547 --rc genhtml_legend=1 00:34:39.547 --rc geninfo_all_blocks=1 00:34:39.547 --rc geninfo_unexecuted_blocks=1 00:34:39.547 00:34:39.547 ' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.547 --rc genhtml_branch_coverage=1 00:34:39.547 --rc genhtml_function_coverage=1 00:34:39.547 --rc genhtml_legend=1 00:34:39.547 --rc geninfo_all_blocks=1 00:34:39.547 --rc geninfo_unexecuted_blocks=1 00:34:39.547 00:34:39.547 ' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:39.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.547 --rc genhtml_branch_coverage=1 00:34:39.547 --rc genhtml_function_coverage=1 00:34:39.547 --rc genhtml_legend=1 00:34:39.547 --rc geninfo_all_blocks=1 00:34:39.547 --rc geninfo_unexecuted_blocks=1 00:34:39.547 00:34:39.547 ' 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:39.547 
00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.547 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:39.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:39.548 00:15:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:39.548 00:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:44.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:44.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:44.822 Found net devices under 0000:af:00.0: cvl_0_0 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.822 00:15:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:44.822 Found net devices under 0000:af:00.1: cvl_0_1 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.822 00:15:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.822 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:44.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:34:44.823 00:34:44.823 --- 10.0.0.2 ping statistics --- 00:34:44.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.823 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:44.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:34:44.823 00:34:44.823 --- 10.0.0.1 ping statistics --- 00:34:44.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.823 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=6283 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@510 -- # waitforlisten 6283 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 6283 ']' 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:44.823 00:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:44.823 [2024-12-14 00:15:23.882430] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:34:44.823 [2024-12-14 00:15:23.882538] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.082 [2024-12-14 00:15:23.999719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:45.082 [2024-12-14 00:15:24.098267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.082 [2024-12-14 00:15:24.098314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:45.082 [2024-12-14 00:15:24.098324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.082 [2024-12-14 00:15:24.098335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.082 [2024-12-14 00:15:24.098343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:45.082 [2024-12-14 00:15:24.100391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.082 [2024-12-14 00:15:24.100398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=6283 00:34:45.650 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:45.909 [2024-12-14 00:15:24.857367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.909 00:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:34:46.174 Malloc0 00:34:46.174 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:34:46.432 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:46.432 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:46.691 [2024-12-14 00:15:25.654486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.691 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:46.950 [2024-12-14 00:15:25.838973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=6632 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 6632 /var/tmp/bdevperf.sock 00:34:46.950 00:15:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 6632 ']' 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:46.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.950 00:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:47.887 00:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:47.887 00:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:34:47.887 00:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:47.887 00:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:48.145 Nvme0n1 00:34:48.145 00:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:34:48.404 Nvme0n1 00:34:48.663 00:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:34:48.663 00:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:34:50.567 00:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:34:50.568 00:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:34:50.826 00:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:51.085 00:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:34:52.020 00:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:34:52.020 00:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:52.020 00:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.020 00:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.279 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:52.538 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.538 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:52.538 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.538 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:52.797 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:52.797 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:52.797 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:52.797 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:53.055 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.055 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:53.055 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:53.055 00:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:53.314 00:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:53.314 00:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:34:53.314 00:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:53.314 00:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:53.573 00:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:34:54.509 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:34:54.509 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:54.509 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.509 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:54.768 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:54.768 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:54.768 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:54.768 00:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:55.026 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.026 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:55.026 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.026 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:55.285 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.285 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:55.285 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.285 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:55.543 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:55.802 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:55.802 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:34:55.802 00:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:56.067 00:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:56.326 00:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:57.261 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:57.261 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:57.261 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.261 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:57.520 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.520 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:57.520 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.520 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:57.778 00:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:58.036 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.037 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:58.037 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:58.037 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.295 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.295 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:58.295 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:58.295 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:58.553 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:58.553 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:58.554 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:58.554 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:58.812 00:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:00.189 00:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:00.189 00:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:00.189 00:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.189 00:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.189 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:00.448 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.448 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:00.448 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.448 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:00.707 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.707 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:00.707 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.707 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:00.965 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:00.965 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:00.965 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:00.965 00:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:00.965 00:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:00.965 00:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:00.965 00:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:01.223 00:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:01.482 00:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:02.417 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:02.417 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:02.417 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.417 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:02.676 00:15:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.676 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:02.676 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.676 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:02.935 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:02.935 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:02.935 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:02.935 00:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.194 
00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.194 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:03.453 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.453 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:03.453 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:03.453 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:03.711 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:03.711 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:03.711 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:03.970 00:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:03.970 00:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:05.348 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:05.348 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:05.348 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.348 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:05.348 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.349 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:05.607 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.607 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:05.607 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.607 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:05.866 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:05.866 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:05.866 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:05.866 00:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:06.125 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.126 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:06.126 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.126 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:06.126 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.126 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:06.384 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:35:06.384 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:06.642 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:06.901 00:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:07.838 00:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:07.838 00:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:07.838 00:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:35:07.838 00:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.096 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.096 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:08.096 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.096 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:08.355 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:08.355 
00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.614 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.614 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:08.614 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.614 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:08.873 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:08.873 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:08.873 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.873 00:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:09.132 00:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.132 00:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:09.132 00:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:09.391 00:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:09.391 00:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:10.769 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.028 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.028 00:15:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.028 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.028 00:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.028 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.028 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.028 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.028 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:11.287 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.287 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:11.287 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.287 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:11.545 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.545 
00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:11.545 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:11.545 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.804 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.804 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:11.804 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:11.804 00:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:12.063 00:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:35:13.000 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.259 00:15:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.259 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:13.518 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.518 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:13.518 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.518 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:13.777 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.777 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:13.777 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.777 00:15:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.036 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.036 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.036 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.036 00:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.036 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.036 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:14.036 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.036 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.295 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.295 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:14.295 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:14.554 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:14.813 00:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:15.748 00:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:15.748 00:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:15.748 00:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:15.748 00:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.007 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.007 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:16.007 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.007 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.266 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.266 00:15:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.266 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.266 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.525 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:16.783 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.783 
00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:16.783 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.783 00:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 6632 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 6632 ']' 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 6632 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6632 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6632' 00:35:17.042 killing process with pid 6632 00:35:17.042 00:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 6632 00:35:17.042 00:15:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 6632
00:35:17.042 {
00:35:17.042   "results": [
00:35:17.042     {
00:35:17.042       "job": "Nvme0n1",
00:35:17.042       "core_mask": "0x4",
00:35:17.042       "workload": "verify",
00:35:17.042       "status": "terminated",
00:35:17.042       "verify_range": {
00:35:17.042         "start": 0,
00:35:17.042         "length": 16384
00:35:17.042       },
00:35:17.042       "queue_depth": 128,
00:35:17.042       "io_size": 4096,
00:35:17.042       "runtime": 28.38543,
00:35:17.042       "iops": 9325.735069012519,
00:35:17.042       "mibps": 36.42865261333015,
00:35:17.042       "io_failed": 0,
00:35:17.042       "io_timeout": 0,
00:35:17.042       "avg_latency_us": 13702.218920898757,
00:35:17.042       "min_latency_us": 741.1809523809524,
00:35:17.042       "max_latency_us": 3019898.88
00:35:17.042     }
00:35:17.042   ],
00:35:17.042   "core_count": 1
00:35:17.042 }
00:35:17.990 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 6632
00:35:17.990 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:35:17.990 [2024-12-14 00:15:25.930535] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:35:17.990 [2024-12-14 00:15:25.930631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6632 ]
00:35:17.990 [2024-12-14 00:15:26.041963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:17.990 [2024-12-14 00:15:26.153797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
00:35:17.990 9876.00 IOPS, 38.58 MiB/s [2024-12-13T23:15:57.131Z] 9828.00 IOPS, 38.39 MiB/s [2024-12-13T23:15:57.131Z] 9881.00 IOPS, 38.60 MiB/s [2024-12-13T23:15:57.132Z] 9896.25 IOPS, 38.66 MiB/s [2024-12-13T23:15:57.132Z] 9902.00 IOPS, 38.68 MiB/s [2024-12-13T23:15:57.132Z] 9896.83 IOPS, 38.66 MiB/s [2024-12-13T23:15:57.132Z] 9899.00 IOPS, 38.67 MiB/s [2024-12-13T23:15:57.132Z] 9937.12 IOPS, 38.82 MiB/s [2024-12-13T23:15:57.132Z] 9937.67 IOPS, 38.82 MiB/s [2024-12-13T23:15:57.132Z] 9953.50 IOPS, 38.88 MiB/s [2024-12-13T23:15:57.132Z] 9967.18 IOPS, 38.93 MiB/s [2024-12-13T23:15:57.132Z] 9987.83 IOPS, 39.01 MiB/s [2024-12-13T23:15:57.132Z] [2024-12-14 00:15:40.279130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.279436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.279666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.991 [2024-12-14 00:15:40.280741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.280974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.280991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88152 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.991 [2024-12-14 00:15:40.281362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.991 [2024-12-14 00:15:40.281371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 
cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 
m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.992 [2024-12-14 00:15:40.281862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.992 [2024-12-14 00:15:40.281919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.281975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.281993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:17.992 [2024-12-14 00:15:40.282024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 
[2024-12-14 00:15:40.282180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 
00:15:40.282344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 
00:15:40.282504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.992 [2024-12-14 00:15:40.282522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.992 [2024-12-14 00:15:40.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 
00:15:40.282753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 
00:15:40.282922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.282973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.282982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283095] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.993 [2024-12-14 00:15:40.283699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.993 [2024-12-14 00:15:40.283821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.993 [2024-12-14 00:15:40.283842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:40.283853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.283874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:40.283884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.283906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:40.283916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.283936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:40.283948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.283969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.283979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:40.284187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:40.284197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.994 9668.08 IOPS, 37.77 MiB/s [2024-12-13T23:15:57.135Z] 8977.50 IOPS, 35.07 MiB/s [2024-12-13T23:15:57.135Z] 8379.00 IOPS, 32.73 MiB/s [2024-12-13T23:15:57.135Z] 8105.00 IOPS, 31.66 MiB/s [2024-12-13T23:15:57.135Z] 8220.24 IOPS, 32.11 MiB/s [2024-12-13T23:15:57.135Z] 8303.89 IOPS, 32.44 MiB/s [2024-12-13T23:15:57.135Z] 8508.68 IOPS, 33.24 MiB/s [2024-12-13T23:15:57.135Z] 8697.75 IOPS, 33.98 MiB/s [2024-12-13T23:15:57.135Z] 8817.76 IOPS, 34.44 MiB/s [2024-12-13T23:15:57.135Z] 8866.00 IOPS, 34.63 MiB/s [2024-12-13T23:15:57.135Z] 8907.43 IOPS, 34.79 MiB/s [2024-12-13T23:15:57.135Z] 9002.12 IOPS, 35.16 MiB/s [2024-12-13T23:15:57.135Z] 9132.12 IOPS, 35.67 MiB/s [2024-12-13T23:15:57.135Z] 9255.73 IOPS, 36.16 MiB/s [2024-12-13T23:15:57.135Z] [2024-12-14 00:15:53.788825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.788884] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.788914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.788926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.788944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.788960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.788978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.788988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.994 [2024-12-14 00:15:53.789457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:53.789485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:53.789512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:53.789538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:53.789565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:17.994 [2024-12-14 00:15:53.789582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.994 [2024-12-14 00:15:53.789592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789663] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.789772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.789782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.995 [2024-12-14 00:15:53.792914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.995 [2024-12-14 00:15:53.792924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.792941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.792952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.792969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.792978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.792995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.793005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.793032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.793058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.793085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.793112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.793371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.793381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.794837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.794986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.996 [2024-12-14 00:15:53.794997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.996 [2024-12-14 00:15:53.795265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:17.996 [2024-12-14 00:15:53.795282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.997 [2024-12-14 00:15:53.795787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.997 [2024-12-14 00:15:53.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.997 [2024-12-14 00:15:53.795972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.997 [2024-12-14 00:15:53.796818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.997 [2024-12-14 00:15:53.796845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.997 [2024-12-14 00:15:53.796872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.997 [2024-12-14 00:15:53.796899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.997 [2024-12-14 00:15:53.796925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.796943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.997 [2024-12-14 00:15:53.796952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:35:17.997 [2024-12-14 00:15:53.797236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.797431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.797464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.797491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.797507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.797518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.799735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.998 [2024-12-14 00:15:53.799977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.799993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.800003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.800019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.800029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.800046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.800056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.800073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.800083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.800100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.998 [2024-12-14 00:15:53.800111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:35:17.998 [2024-12-14 00:15:53.800128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.800402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.800982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.800992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:17.999 [2024-12-14 00:15:53.801461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.801504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.801514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.803155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.803178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.803199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.803209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.803227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:17.999 [2024-12-14 00:15:53.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:35:17.999 [2024-12-14 00:15:53.803255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.000 [2024-12-14 00:15:53.803961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.803978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:18.000 [2024-12-14 00:15:53.803988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:18.000 [2024-12-14 00:15:53.804004] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.000 [2024-12-14 00:15:53.804013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.804030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.804039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.804056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.804066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.804089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.000 [2024-12-14 00:15:53.804099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.000 [2024-12-14 00:15:53.805309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:18.000 [2024-12-14 00:15:53.805326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.805336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.805362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.805390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.805416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.805450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.805476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.805504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.805520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.805530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.806720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.806747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.806801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.806881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.806910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.806973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.806990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.807081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.807300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.807327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.807344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.001 [2024-12-14 00:15:53.807354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.808442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.001 [2024-12-14 00:15:53.808466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:18.001 [2024-12-14 00:15:53.808487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.808497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.808525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.808552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.808634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.808973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.808983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.809486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.809973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.809983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.810121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.810147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.002 [2024-12-14 00:15:53.810227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:18.002 [2024-12-14 00:15:53.810243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.002 [2024-12-14 00:15:53.810253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.810386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.810412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.810444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.810471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.810498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.810541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.810551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.811855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.811892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.811919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.811946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.811973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.811991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.003 [2024-12-14 00:15:53.812486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.812503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.812513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.814165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.814187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.814208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.814219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.814236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.814246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:18.003 [2024-12-14 00:15:53.814270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.003 [2024-12-14 00:15:53.814279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.814465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.814475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.815972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.815989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.815998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.816079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.004 [2024-12-14 00:15:53.816260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.004 [2024-12-14 00:15:53.816286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:18.004 [2024-12-14 00:15:53.816302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.004 [2024-12-14 00:15:53.816313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:35:18.004 [2024-12-14 00:15:53.816332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.004 [2024-12-14 00:15:53.816343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:35:18.005 [2024-12-14 00:15:53.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.005 [2024-12-14 00:15:53.816372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE and READ commands on qid:1, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), varying cid/lba/sqhd, timestamps 00:15:53.816389 through 00:15:53.824760 ...]
00:35:18.007 [2024-12-14 00:15:53.824777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:18.007 [2024-12-14 00:15:53.824787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.007 [2024-12-14 00:15:53.824813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.007 [2024-12-14 00:15:53.824840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.007 [2024-12-14 00:15:53.824866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.007 [2024-12-14 00:15:53.824892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.007 [2024-12-14 00:15:53.824919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.007 [2024-12-14 00:15:53.824936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.007 [2024-12-14 00:15:53.824946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.824963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.824973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.824990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.825000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.825017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.825026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.825044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.825054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.825070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.825082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.825099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.825109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.827959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.827976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.827986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.828200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.828227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.828283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.828310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.008 [2024-12-14 00:15:53.828336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:18.008 [2024-12-14 00:15:53.828353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.008 [2024-12-14 00:15:53.828363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.828537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.828595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.828647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.828700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.828754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.828851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.828862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.829932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.829975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.829985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.830003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.009 [2024-12-14 00:15:53.830012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.831721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.831743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.831801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:18.009 [2024-12-14 00:15:53.831818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.009 [2024-12-14 00:15:53.831828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.831855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.831882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.831909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.831940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.831967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.831984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.831994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.010 [2024-12-14 00:15:53.832777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.010 [2024-12-14 00:15:53.832804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:18.010 [2024-12-14 00:15:53.832820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.832855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.832979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.832989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.833006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.833015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.833032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.833042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.833764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.833786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.833807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.833818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.833835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.833846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.834760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.834791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.834818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.834846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.834876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.834904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.834930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.834957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.834974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.834984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.835011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.835037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.835064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.835090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.835117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.835134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.835144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.836338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.836365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.011 [2024-12-14 00:15:53.836450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:18.011 [2024-12-14 00:15:53.836521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.011 [2024-12-14 00:15:53.836531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.012 [2024-12-14 00:15:53.836637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.012 [2024-12-14 00:15:53.836666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.012 [2024-12-14 00:15:53.836745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.012 [2024-12-14 00:15:53.836772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.012 [2024-12-14 00:15:53.836799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:18.012 [2024-12-14 00:15:53.836867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.012 [2024-12-14 00:15:53.836877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command READ/WRITE *NOTICE* entries and matching spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions on qid:1 (timestamps 00:15:53.836894 through 00:15:53.845378, sqhd:0010 through sqhd:0001) elided ...]
00:35:18.015 [2024-12-14 00:15:53.845395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.845405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.845572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.845600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.845671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.845682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.846312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.846457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.846485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.846519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.015 [2024-12-14 00:15:53.846545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.846974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.846992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.847012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.847025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:18.015 [2024-12-14 00:15:53.847042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.015 [2024-12-14 00:15:53.847052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.847104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.847280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.847290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:100728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.848871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.848977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.848993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.849003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.849020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.849050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.849060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.849077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.016 [2024-12-14 00:15:53.849087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.849104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.849114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.850166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.016 [2024-12-14 00:15:53.850189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:18.016 [2024-12-14 00:15:53.850227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.850238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.850266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.850292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.850319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.850346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.850975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.850992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.017 [2024-12-14 00:15:53.851198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.851224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:18.017 [2024-12-14 00:15:53.851241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:18.017 [2024-12-14 00:15:53.851253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:18.017 9291.19 IOPS, 36.29 MiB/s [2024-12-13T23:15:57.158Z] 9316.71 IOPS, 36.39 MiB/s [2024-12-13T23:15:57.158Z] Received shutdown signal, test time was about 28.386088 seconds 00:35:18.017 00:35:18.017 Latency(us) 00:35:18.017 [2024-12-13T23:15:57.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.017 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:18.017 Verification LBA range: start 0x0 length 0x4000 00:35:18.017 Nvme0n1 : 28.39 9325.74 36.43 0.00 0.00 13702.22 741.18 3019898.88 00:35:18.017 [2024-12-13T23:15:57.158Z] =================================================================================================================== 00:35:18.017 [2024-12-13T23:15:57.158Z] Total : 9325.74 36.43 0.00 0.00 13702.22 741.18 3019898.88 00:35:18.017 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - 
SIGINT SIGTERM EXIT 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.276 rmmod nvme_tcp 00:35:18.276 rmmod nvme_fabrics 00:35:18.276 rmmod nvme_keyring 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 6283 ']' 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 6283 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 6283 ']' 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 6283 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:18.276 00:15:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 6283 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 6283' 00:35:18.276 killing process with pid 6283 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 6283 00:35:18.276 00:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 6283 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.653 00:15:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.653 00:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.607 00:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.607 00:35:21.607 real 0m42.554s 00:35:21.607 user 1m55.260s 00:35:21.607 sys 0m10.897s 00:35:21.607 00:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.607 00:16:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:21.607 ************************************ 00:35:21.607 END TEST nvmf_host_multipath_status 00:35:21.607 ************************************ 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.867 ************************************ 00:35:21.867 START TEST nvmf_discovery_remove_ifc 00:35:21.867 ************************************ 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:21.867 * Looking for test storage... 
00:35:21.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:21.867 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:35:21.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.868 --rc genhtml_branch_coverage=1 00:35:21.868 --rc genhtml_function_coverage=1 00:35:21.868 --rc genhtml_legend=1 00:35:21.868 --rc geninfo_all_blocks=1 00:35:21.868 --rc geninfo_unexecuted_blocks=1 00:35:21.868 00:35:21.868 ' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:21.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.868 --rc genhtml_branch_coverage=1 00:35:21.868 --rc genhtml_function_coverage=1 00:35:21.868 --rc genhtml_legend=1 00:35:21.868 --rc geninfo_all_blocks=1 00:35:21.868 --rc geninfo_unexecuted_blocks=1 00:35:21.868 00:35:21.868 ' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:21.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.868 --rc genhtml_branch_coverage=1 00:35:21.868 --rc genhtml_function_coverage=1 00:35:21.868 --rc genhtml_legend=1 00:35:21.868 --rc geninfo_all_blocks=1 00:35:21.868 --rc geninfo_unexecuted_blocks=1 00:35:21.868 00:35:21.868 ' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:21.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:21.868 --rc genhtml_branch_coverage=1 00:35:21.868 --rc genhtml_function_coverage=1 00:35:21.868 --rc genhtml_legend=1 00:35:21.868 --rc geninfo_all_blocks=1 00:35:21.868 --rc geninfo_unexecuted_blocks=1 00:35:21.868 00:35:21.868 ' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:21.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:21.868 00:16:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:21.868 
00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:21.868 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:22.127 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:22.127 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:22.127 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:22.127 00:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.398 00:16:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.398 00:16:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:27.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.398 00:16:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:27.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:27.398 Found net devices under 0000:af:00.0: cvl_0_0 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:27.398 Found net devices under 0000:af:00.1: cvl_0_1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.398 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:35:27.399 00:35:27.399 --- 10.0.0.2 ping statistics --- 00:35:27.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.399 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:35:27.399 00:35:27.399 --- 10.0.0.1 ping statistics --- 00:35:27.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.399 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=15350 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 15350 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 15350 ']' 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.399 00:16:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:27.399 [2024-12-14 00:16:06.394547] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:27.399 [2024-12-14 00:16:06.394639] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.399 [2024-12-14 00:16:06.509915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.658 [2024-12-14 00:16:06.612312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.658 [2024-12-14 00:16:06.612357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:27.658 [2024-12-14 00:16:06.612367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.658 [2024-12-14 00:16:06.612381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.658 [2024-12-14 00:16:06.612389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.658 [2024-12-14 00:16:06.613664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.225 [2024-12-14 00:16:07.248359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.225 [2024-12-14 00:16:07.256545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:28.225 null0 00:35:28.225 [2024-12-14 00:16:07.288555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=15515 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 15515 /tmp/host.sock 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 15515 ']' 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:28.225 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.225 00:16:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:28.484 [2024-12-14 00:16:07.389559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
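The nvmf_tcp_init steps traced earlier (one NIC port moved into a network namespace, both ends addressed, TCP port 4420 opened, reachability verified by ping in both directions) boil down to the following sequence. This is a dry-run sketch, not the test script itself: interface names, addresses, and the namespace name are copied from the log, and the `run` wrapper only prints each command, since the real ones require root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from the trace above.
# `run` echoes each command instead of executing it (the real ones need root).
run() { printf '+ %s\n' "$*"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk   # names from the log

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # target port moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # root ns -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root ns
```

Isolating the target port in its own namespace is what lets the test later simulate interface loss (`ip link set cvl_0_0 down`) without disturbing the build host's real networking.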
00:35:28.484 [2024-12-14 00:16:07.389649] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid15515 ] 00:35:28.484 [2024-12-14 00:16:07.500884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.484 [2024-12-14 00:16:07.611465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.052 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.311 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.311 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:29.311 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.311 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:29.569 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.569 00:16:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:29.569 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.569 00:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.505 [2024-12-14 00:16:09.538525] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:30.505 [2024-12-14 00:16:09.538558] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:30.505 [2024-12-14 00:16:09.538584] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:30.505 [2024-12-14 00:16:09.627858] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:30.765 [2024-12-14 00:16:09.810054] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:30.765 [2024-12-14 00:16:09.811258] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x615000326200:1 started. 
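Host-side, the test drives a second `nvmf_tgt` instance over a private RPC socket. The RPC sequence visible in the trace can be sketched as another dry run; all flags and parameters are copied from the log (the binary path is shortened), `rpc_cmd` stands for the test suite's RPC helper as used in the trace, and `run` again prints instead of executing.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the host-side startup sequence from the trace.
run() { printf '+ %s\n' "$*"; }

HOST_SOCK=/tmp/host.sock

# 1. start the host app, paused until framework_start_init
#    (-L bdev_nvme enables the debug log lines seen in the trace)
run nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme
# 2. enable bdev_nvme error injection/reporting before the framework starts
run rpc_cmd -s "$HOST_SOCK" bdev_nvme_set_options -e 1
run rpc_cmd -s "$HOST_SOCK" framework_start_init
# 3. attach via the discovery service on 10.0.0.2:8009; the short reconnect/loss
#    timeouts are what make the later interface-down test fail over quickly
run rpc_cmd -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach
```

With `--wait-for-attach`, the discovery RPC only returns once the `nvme0` controller reported in the `discovery_attach_controller_done` log line has been created, which is why the subsequent `wait_for_bdev nvme0n1` check succeeds almost immediately.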
00:35:30.765 [2024-12-14 00:16:09.812873] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:30.765 [2024-12-14 00:16:09.812927] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:30.765 [2024-12-14 00:16:09.812989] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:30.765 [2024-12-14 00:16:09.813008] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:30.765 [2024-12-14 00:16:09.813035] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:30.765 [2024-12-14 00:16:09.817918] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x615000326200 was disconnected and freed. delete nvme_qpair. 
00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:30.765 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.023 00:16:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.023 00:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.023 00:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:31.023 00:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:31.959 00:16:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:33.338 00:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:34.273 00:16:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:34.273 00:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:35.211 00:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:36.149 00:16:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.149 [2024-12-14 00:16:15.253875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:36.149 [2024-12-14 00:16:15.253949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.149 [2024-12-14 00:16:15.253967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.149 [2024-12-14 00:16:15.253981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.149 [2024-12-14 00:16:15.253991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.149 [2024-12-14 00:16:15.254006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.149 [2024-12-14 00:16:15.254016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.149 [2024-12-14 00:16:15.254026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.149 
[2024-12-14 00:16:15.254035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.149 [2024-12-14 00:16:15.254046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:36.149 [2024-12-14 00:16:15.254055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:36.149 [2024-12-14 00:16:15.254065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:36.149 [2024-12-14 00:16:15.263894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:36.149 00:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:36.149 [2024-12-14 00:16:15.273931] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:36.149 [2024-12-14 00:16:15.273957] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:36.149 [2024-12-14 00:16:15.273966] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:36.149 [2024-12-14 00:16:15.273973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:36.149 [2024-12-14 00:16:15.274011] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:37.527 [2024-12-14 00:16:16.299467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:37.527 [2024-12-14 00:16:16.299531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325d00 with addr=10.0.0.2, port=4420 00:35:37.527 [2024-12-14 00:16:16.299558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325d00 is same with the state(6) to be set 00:35:37.527 [2024-12-14 00:16:16.299606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325d00 (9): Bad file descriptor 00:35:37.527 [2024-12-14 00:16:16.300265] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:35:37.527 [2024-12-14 00:16:16.300316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:37.527 [2024-12-14 00:16:16.300338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:37.527 [2024-12-14 00:16:16.300356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:37.527 [2024-12-14 00:16:16.300376] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:37.527 [2024-12-14 00:16:16.300389] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:37.527 [2024-12-14 00:16:16.300400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:37.527 [2024-12-14 00:16:16.300417] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:35:37.527 [2024-12-14 00:16:16.300429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:37.527 00:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:38.463 [2024-12-14 00:16:17.302934] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:38.464 [2024-12-14 00:16:17.302965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:35:38.464 [2024-12-14 00:16:17.302981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:38.464 [2024-12-14 00:16:17.302991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:38.464 [2024-12-14 00:16:17.303002] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:35:38.464 [2024-12-14 00:16:17.303015] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:38.464 [2024-12-14 00:16:17.303023] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:38.464 [2024-12-14 00:16:17.303030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:38.464 [2024-12-14 00:16:17.303062] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:35:38.464 [2024-12-14 00:16:17.303093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:38.464 [2024-12-14 00:16:17.303107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.464 [2024-12-14 00:16:17.303121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:38.464 [2024-12-14 00:16:17.303131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.464 [2024-12-14 00:16:17.303142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:35:38.464 [2024-12-14 00:16:17.303151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.464 [2024-12-14 00:16:17.303162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:38.464 [2024-12-14 00:16:17.303171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.464 [2024-12-14 00:16:17.303182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:38.464 [2024-12-14 00:16:17.303191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:38.464 [2024-12-14 00:16:17.303200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:35:38.464 [2024-12-14 00:16:17.303222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325800 (9): Bad file descriptor 00:35:38.464 [2024-12-14 00:16:17.303984] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:35:38.464 [2024-12-14 00:16:17.304008] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.464 
00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:38.464 00:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:39.400 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.659 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:39.659 00:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:40.226 [2024-12-14 00:16:19.313871] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:40.226 [2024-12-14 00:16:19.313898] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:40.226 [2024-12-14 00:16:19.313928] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:40.485 [2024-12-14 00:16:19.442344] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:35:40.485 00:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:40.744 [2024-12-14 00:16:19.666563] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:35:40.744 [2024-12-14 00:16:19.667610] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x615000326e80:1 started. 
00:35:40.744 [2024-12-14 00:16:19.669164] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:40.744 [2024-12-14 00:16:19.669207] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:40.744 [2024-12-14 00:16:19.669261] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:40.744 [2024-12-14 00:16:19.669281] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:35:40.744 [2024-12-14 00:16:19.669292] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:40.744 [2024-12-14 00:16:19.673646] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x615000326e80 was disconnected and freed. delete nvme_qpair. 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:41.680 00:16:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 15515 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 15515 ']' 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 15515 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15515 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15515' 00:35:41.680 killing process with pid 15515 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 15515 00:35:41.680 00:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 15515 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:42.618 00:16:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:42.618 rmmod nvme_tcp 00:35:42.618 rmmod nvme_fabrics 00:35:42.618 rmmod nvme_keyring 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 15350 ']' 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 15350 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 15350 ']' 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 15350 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 15350 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 15350' 00:35:42.618 killing process with 
pid 15350 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 15350 00:35:42.618 00:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 15350 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.995 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.996 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.996 00:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.900 00:35:45.900 real 0m24.059s 00:35:45.900 user 0m31.642s 00:35:45.900 sys 0m5.509s 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:45.900 00:16:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:45.900 ************************************ 00:35:45.900 END TEST nvmf_discovery_remove_ifc 00:35:45.900 ************************************ 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.900 ************************************ 00:35:45.900 START TEST nvmf_identify_kernel_target 00:35:45.900 ************************************ 00:35:45.900 00:16:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:45.900 * Looking for test storage... 
00:35:45.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:45.900 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:45.900 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:45.900 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:46.159 00:16:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:46.159 00:16:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.159 --rc genhtml_branch_coverage=1 00:35:46.159 --rc genhtml_function_coverage=1 00:35:46.159 --rc genhtml_legend=1 00:35:46.159 --rc geninfo_all_blocks=1 00:35:46.159 --rc geninfo_unexecuted_blocks=1 00:35:46.159 00:35:46.159 ' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.159 --rc genhtml_branch_coverage=1 00:35:46.159 --rc genhtml_function_coverage=1 00:35:46.159 --rc genhtml_legend=1 00:35:46.159 --rc geninfo_all_blocks=1 00:35:46.159 --rc geninfo_unexecuted_blocks=1 00:35:46.159 00:35:46.159 ' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.159 --rc genhtml_branch_coverage=1 00:35:46.159 --rc genhtml_function_coverage=1 00:35:46.159 --rc genhtml_legend=1 00:35:46.159 --rc geninfo_all_blocks=1 00:35:46.159 --rc geninfo_unexecuted_blocks=1 00:35:46.159 00:35:46.159 ' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:46.159 --rc genhtml_branch_coverage=1 00:35:46.159 --rc genhtml_function_coverage=1 00:35:46.159 --rc genhtml_legend=1 00:35:46.159 --rc geninfo_all_blocks=1 00:35:46.159 --rc geninfo_unexecuted_blocks=1 00:35:46.159 00:35:46.159 ' 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:46.159 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:46.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:46.160 00:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.433 00:16:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:51.433 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.433 00:16:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.433 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:51.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.434 00:16:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:51.434 Found net devices under 0000:af:00.0: cvl_0_0 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:51.434 Found net devices under 0000:af:00.1: cvl_0_1 
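The two "Found net devices" records above come from expanding the sysfs glob `/sys/bus/pci/devices/$pci/net/*` (common.sh@411) and then stripping the directory prefix (common.sh@427). A minimal reproduction of that glob/strip logic against a mocked sysfs tree; the temp-dir layout is an assumption purely so the snippet runs without the real NIC:

```shell
# Reproduce the pci_net_devs glob/strip logic from nvmf/common.sh@411/@427
# against a mocked sysfs tree, so it runs without real hardware.
sysroot=$(mktemp -d)                      # stand-in for /sys/bus/pci/devices
pci=0000:af:00.0                          # address taken from this log
mkdir -p "$sysroot/$pci/net/cvl_0_0"      # mocked netdev entry for the NIC

pci_net_devs=("$sysroot/$pci/net/"*)      # one path per netdev of the NIC
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -r "$sysroot"
```

The `##*/` expansion is why the log prints bare interface names (`cvl_0_0`) rather than full sysfs paths.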
00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:51.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:51.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:35:51.434 00:35:51.434 --- 10.0.0.2 ping statistics --- 00:35:51.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.434 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:35:51.434 00:35:51.434 --- 10.0.0.1 ping statistics --- 00:35:51.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.434 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:35:51.434 
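The `nvmf_tcp_init` sequence above (common.sh@250-291) moves one NIC port into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host, then verifies the path with a ping in each direction. A dry-run sketch of that sequence; `run` only prints each command so it can be traced without root or hardware, and the interface names are the ones this log discovered:

```shell
# Dry-run of the nvmf_tcp_init topology built above. "run" prints instead
# of executing, so no root privileges or NIC are needed to trace it.
run() { printf '%s\n' "$*"; }

TARGET_IF=cvl_0_0            # target-side port (moved into the namespace)
INITIATOR_IF=cvl_0_1         # initiator-side port (stays in the root ns)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The iptables rule opens the standard NVMe/TCP port (4420) on the initiator-facing interface before the connectivity check, matching the `ipts`/`iptables -I INPUT` record in the log.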
00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:51.434 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:51.693 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:51.693 00:16:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:54.226 Waiting for block devices as requested 00:35:54.226 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:54.226 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:54.226 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:54.226 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:54.486 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:54.486 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:54.486 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:54.745 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:54.745 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:54.745 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:54.745 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:55.004 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:55.004 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:55.004 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:55.262 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:35:55.262 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:55.262 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:55.552 No valid GPT data, bailing 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:55.552 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:55.553 00:35:55.553 Discovery Log Number of Records 2, Generation counter 2 00:35:55.553 =====Discovery Log Entry 0====== 00:35:55.553 trtype: tcp 00:35:55.553 adrfam: ipv4 00:35:55.553 subtype: current discovery subsystem 
00:35:55.553 treq: not specified, sq flow control disable supported 00:35:55.553 portid: 1 00:35:55.553 trsvcid: 4420 00:35:55.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:55.553 traddr: 10.0.0.1 00:35:55.553 eflags: none 00:35:55.553 sectype: none 00:35:55.553 =====Discovery Log Entry 1====== 00:35:55.553 trtype: tcp 00:35:55.553 adrfam: ipv4 00:35:55.553 subtype: nvme subsystem 00:35:55.553 treq: not specified, sq flow control disable supported 00:35:55.553 portid: 1 00:35:55.553 trsvcid: 4420 00:35:55.553 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:55.553 traddr: 10.0.0.1 00:35:55.553 eflags: none 00:35:55.553 sectype: none 00:35:55.553 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:35:55.553 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:35:55.860 ===================================================== 00:35:55.860 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:35:55.860 ===================================================== 00:35:55.860 Controller Capabilities/Features 00:35:55.860 ================================ 00:35:55.860 Vendor ID: 0000 00:35:55.860 Subsystem Vendor ID: 0000 00:35:55.860 Serial Number: cda87f38804f65935444 00:35:55.860 Model Number: Linux 00:35:55.860 Firmware Version: 6.8.9-20 00:35:55.860 Recommended Arb Burst: 0 00:35:55.860 IEEE OUI Identifier: 00 00 00 00:35:55.860 Multi-path I/O 00:35:55.860 May have multiple subsystem ports: No 00:35:55.860 May have multiple controllers: No 00:35:55.860 Associated with SR-IOV VF: No 00:35:55.860 Max Data Transfer Size: Unlimited 00:35:55.860 Max Number of Namespaces: 0 00:35:55.860 Max Number of I/O Queues: 1024 00:35:55.860 NVMe Specification Version (VS): 1.3 00:35:55.860 NVMe Specification Version (Identify): 1.3 00:35:55.860 Maximum Queue Entries: 1024 
00:35:55.860 Contiguous Queues Required: No 00:35:55.860 Arbitration Mechanisms Supported 00:35:55.860 Weighted Round Robin: Not Supported 00:35:55.860 Vendor Specific: Not Supported 00:35:55.860 Reset Timeout: 7500 ms 00:35:55.860 Doorbell Stride: 4 bytes 00:35:55.860 NVM Subsystem Reset: Not Supported 00:35:55.860 Command Sets Supported 00:35:55.860 NVM Command Set: Supported 00:35:55.860 Boot Partition: Not Supported 00:35:55.860 Memory Page Size Minimum: 4096 bytes 00:35:55.860 Memory Page Size Maximum: 4096 bytes 00:35:55.860 Persistent Memory Region: Not Supported 00:35:55.860 Optional Asynchronous Events Supported 00:35:55.860 Namespace Attribute Notices: Not Supported 00:35:55.860 Firmware Activation Notices: Not Supported 00:35:55.860 ANA Change Notices: Not Supported 00:35:55.860 PLE Aggregate Log Change Notices: Not Supported 00:35:55.860 LBA Status Info Alert Notices: Not Supported 00:35:55.860 EGE Aggregate Log Change Notices: Not Supported 00:35:55.860 Normal NVM Subsystem Shutdown event: Not Supported 00:35:55.860 Zone Descriptor Change Notices: Not Supported 00:35:55.860 Discovery Log Change Notices: Supported 00:35:55.860 Controller Attributes 00:35:55.860 128-bit Host Identifier: Not Supported 00:35:55.860 Non-Operational Permissive Mode: Not Supported 00:35:55.860 NVM Sets: Not Supported 00:35:55.860 Read Recovery Levels: Not Supported 00:35:55.860 Endurance Groups: Not Supported 00:35:55.860 Predictable Latency Mode: Not Supported 00:35:55.860 Traffic Based Keep ALive: Not Supported 00:35:55.860 Namespace Granularity: Not Supported 00:35:55.860 SQ Associations: Not Supported 00:35:55.860 UUID List: Not Supported 00:35:55.860 Multi-Domain Subsystem: Not Supported 00:35:55.860 Fixed Capacity Management: Not Supported 00:35:55.860 Variable Capacity Management: Not Supported 00:35:55.860 Delete Endurance Group: Not Supported 00:35:55.860 Delete NVM Set: Not Supported 00:35:55.860 Extended LBA Formats Supported: Not Supported 00:35:55.860 Flexible 
Data Placement Supported: Not Supported 00:35:55.860 00:35:55.860 Controller Memory Buffer Support 00:35:55.860 ================================ 00:35:55.860 Supported: No 00:35:55.860 00:35:55.860 Persistent Memory Region Support 00:35:55.860 ================================ 00:35:55.860 Supported: No 00:35:55.860 00:35:55.860 Admin Command Set Attributes 00:35:55.860 ============================ 00:35:55.860 Security Send/Receive: Not Supported 00:35:55.860 Format NVM: Not Supported 00:35:55.860 Firmware Activate/Download: Not Supported 00:35:55.860 Namespace Management: Not Supported 00:35:55.860 Device Self-Test: Not Supported 00:35:55.860 Directives: Not Supported 00:35:55.860 NVMe-MI: Not Supported 00:35:55.860 Virtualization Management: Not Supported 00:35:55.860 Doorbell Buffer Config: Not Supported 00:35:55.860 Get LBA Status Capability: Not Supported 00:35:55.860 Command & Feature Lockdown Capability: Not Supported 00:35:55.860 Abort Command Limit: 1 00:35:55.860 Async Event Request Limit: 1 00:35:55.860 Number of Firmware Slots: N/A 00:35:55.860 Firmware Slot 1 Read-Only: N/A 00:35:55.860 Firmware Activation Without Reset: N/A 00:35:55.860 Multiple Update Detection Support: N/A 00:35:55.860 Firmware Update Granularity: No Information Provided 00:35:55.860 Per-Namespace SMART Log: No 00:35:55.860 Asymmetric Namespace Access Log Page: Not Supported 00:35:55.860 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:35:55.860 Command Effects Log Page: Not Supported 00:35:55.860 Get Log Page Extended Data: Supported 00:35:55.860 Telemetry Log Pages: Not Supported 00:35:55.860 Persistent Event Log Pages: Not Supported 00:35:55.860 Supported Log Pages Log Page: May Support 00:35:55.860 Commands Supported & Effects Log Page: Not Supported 00:35:55.860 Feature Identifiers & Effects Log Page:May Support 00:35:55.860 NVMe-MI Commands & Effects Log Page: May Support 00:35:55.860 Data Area 4 for Telemetry Log: Not Supported 00:35:55.860 Error Log Page Entries 
Supported: 1 00:35:55.860 Keep Alive: Not Supported 00:35:55.860 00:35:55.860 NVM Command Set Attributes 00:35:55.860 ========================== 00:35:55.860 Submission Queue Entry Size 00:35:55.860 Max: 1 00:35:55.860 Min: 1 00:35:55.860 Completion Queue Entry Size 00:35:55.860 Max: 1 00:35:55.860 Min: 1 00:35:55.860 Number of Namespaces: 0 00:35:55.861 Compare Command: Not Supported 00:35:55.861 Write Uncorrectable Command: Not Supported 00:35:55.861 Dataset Management Command: Not Supported 00:35:55.861 Write Zeroes Command: Not Supported 00:35:55.861 Set Features Save Field: Not Supported 00:35:55.861 Reservations: Not Supported 00:35:55.861 Timestamp: Not Supported 00:35:55.861 Copy: Not Supported 00:35:55.861 Volatile Write Cache: Not Present 00:35:55.861 Atomic Write Unit (Normal): 1 00:35:55.861 Atomic Write Unit (PFail): 1 00:35:55.861 Atomic Compare & Write Unit: 1 00:35:55.861 Fused Compare & Write: Not Supported 00:35:55.861 Scatter-Gather List 00:35:55.861 SGL Command Set: Supported 00:35:55.861 SGL Keyed: Not Supported 00:35:55.861 SGL Bit Bucket Descriptor: Not Supported 00:35:55.861 SGL Metadata Pointer: Not Supported 00:35:55.861 Oversized SGL: Not Supported 00:35:55.861 SGL Metadata Address: Not Supported 00:35:55.861 SGL Offset: Supported 00:35:55.861 Transport SGL Data Block: Not Supported 00:35:55.861 Replay Protected Memory Block: Not Supported 00:35:55.861 00:35:55.861 Firmware Slot Information 00:35:55.861 ========================= 00:35:55.861 Active slot: 0 00:35:55.861 00:35:55.861 00:35:55.861 Error Log 00:35:55.861 ========= 00:35:55.861 00:35:55.861 Active Namespaces 00:35:55.861 ================= 00:35:55.861 Discovery Log Page 00:35:55.861 ================== 00:35:55.861 Generation Counter: 2 00:35:55.861 Number of Records: 2 00:35:55.861 Record Format: 0 00:35:55.861 00:35:55.861 Discovery Log Entry 0 00:35:55.861 ---------------------- 00:35:55.861 Transport Type: 3 (TCP) 00:35:55.861 Address Family: 1 (IPv4) 00:35:55.861 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:35:55.861 Entry Flags: 00:35:55.861 Duplicate Returned Information: 0 00:35:55.861 Explicit Persistent Connection Support for Discovery: 0 00:35:55.861 Transport Requirements: 00:35:55.861 Secure Channel: Not Specified 00:35:55.861 Port ID: 1 (0x0001) 00:35:55.861 Controller ID: 65535 (0xffff) 00:35:55.861 Admin Max SQ Size: 32 00:35:55.861 Transport Service Identifier: 4420 00:35:55.861 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:35:55.861 Transport Address: 10.0.0.1 00:35:55.861 Discovery Log Entry 1 00:35:55.861 ---------------------- 00:35:55.861 Transport Type: 3 (TCP) 00:35:55.861 Address Family: 1 (IPv4) 00:35:55.861 Subsystem Type: 2 (NVM Subsystem) 00:35:55.861 Entry Flags: 00:35:55.861 Duplicate Returned Information: 0 00:35:55.861 Explicit Persistent Connection Support for Discovery: 0 00:35:55.861 Transport Requirements: 00:35:55.861 Secure Channel: Not Specified 00:35:55.861 Port ID: 1 (0x0001) 00:35:55.861 Controller ID: 65535 (0xffff) 00:35:55.861 Admin Max SQ Size: 32 00:35:55.861 Transport Service Identifier: 4420 00:35:55.861 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:35:55.861 Transport Address: 10.0.0.1 00:35:55.861 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.861 get_feature(0x01) failed 00:35:55.861 get_feature(0x02) failed 00:35:55.861 get_feature(0x04) failed 00:35:55.861 ===================================================== 00:35:55.861 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:55.861 ===================================================== 00:35:55.861 Controller Capabilities/Features 00:35:55.861 ================================ 00:35:55.861 Vendor ID: 0000 00:35:55.861 Subsystem Vendor ID: 
0000 00:35:55.861 Serial Number: f321b86618583dbac10a 00:35:55.861 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:35:55.861 Firmware Version: 6.8.9-20 00:35:55.861 Recommended Arb Burst: 6 00:35:55.861 IEEE OUI Identifier: 00 00 00 00:35:55.861 Multi-path I/O 00:35:55.861 May have multiple subsystem ports: Yes 00:35:55.861 May have multiple controllers: Yes 00:35:55.861 Associated with SR-IOV VF: No 00:35:55.861 Max Data Transfer Size: Unlimited 00:35:55.861 Max Number of Namespaces: 1024 00:35:55.861 Max Number of I/O Queues: 128 00:35:55.861 NVMe Specification Version (VS): 1.3 00:35:55.861 NVMe Specification Version (Identify): 1.3 00:35:55.861 Maximum Queue Entries: 1024 00:35:55.861 Contiguous Queues Required: No 00:35:55.861 Arbitration Mechanisms Supported 00:35:55.861 Weighted Round Robin: Not Supported 00:35:55.861 Vendor Specific: Not Supported 00:35:55.861 Reset Timeout: 7500 ms 00:35:55.861 Doorbell Stride: 4 bytes 00:35:55.861 NVM Subsystem Reset: Not Supported 00:35:55.861 Command Sets Supported 00:35:55.861 NVM Command Set: Supported 00:35:55.861 Boot Partition: Not Supported 00:35:55.861 Memory Page Size Minimum: 4096 bytes 00:35:55.861 Memory Page Size Maximum: 4096 bytes 00:35:55.861 Persistent Memory Region: Not Supported 00:35:55.861 Optional Asynchronous Events Supported 00:35:55.861 Namespace Attribute Notices: Supported 00:35:55.861 Firmware Activation Notices: Not Supported 00:35:55.861 ANA Change Notices: Supported 00:35:55.861 PLE Aggregate Log Change Notices: Not Supported 00:35:55.861 LBA Status Info Alert Notices: Not Supported 00:35:55.861 EGE Aggregate Log Change Notices: Not Supported 00:35:55.861 Normal NVM Subsystem Shutdown event: Not Supported 00:35:55.861 Zone Descriptor Change Notices: Not Supported 00:35:55.861 Discovery Log Change Notices: Not Supported 00:35:55.861 Controller Attributes 00:35:55.861 128-bit Host Identifier: Supported 00:35:55.861 Non-Operational Permissive Mode: Not Supported 00:35:55.861 NVM Sets: Not 
Supported 00:35:55.861 Read Recovery Levels: Not Supported 00:35:55.861 Endurance Groups: Not Supported 00:35:55.861 Predictable Latency Mode: Not Supported 00:35:55.861 Traffic Based Keep ALive: Supported 00:35:55.861 Namespace Granularity: Not Supported 00:35:55.861 SQ Associations: Not Supported 00:35:55.861 UUID List: Not Supported 00:35:55.861 Multi-Domain Subsystem: Not Supported 00:35:55.861 Fixed Capacity Management: Not Supported 00:35:55.861 Variable Capacity Management: Not Supported 00:35:55.861 Delete Endurance Group: Not Supported 00:35:55.861 Delete NVM Set: Not Supported 00:35:55.861 Extended LBA Formats Supported: Not Supported 00:35:55.861 Flexible Data Placement Supported: Not Supported 00:35:55.861 00:35:55.861 Controller Memory Buffer Support 00:35:55.861 ================================ 00:35:55.861 Supported: No 00:35:55.861 00:35:55.861 Persistent Memory Region Support 00:35:55.861 ================================ 00:35:55.861 Supported: No 00:35:55.861 00:35:55.861 Admin Command Set Attributes 00:35:55.861 ============================ 00:35:55.861 Security Send/Receive: Not Supported 00:35:55.861 Format NVM: Not Supported 00:35:55.861 Firmware Activate/Download: Not Supported 00:35:55.861 Namespace Management: Not Supported 00:35:55.861 Device Self-Test: Not Supported 00:35:55.861 Directives: Not Supported 00:35:55.861 NVMe-MI: Not Supported 00:35:55.861 Virtualization Management: Not Supported 00:35:55.861 Doorbell Buffer Config: Not Supported 00:35:55.861 Get LBA Status Capability: Not Supported 00:35:55.861 Command & Feature Lockdown Capability: Not Supported 00:35:55.861 Abort Command Limit: 4 00:35:55.861 Async Event Request Limit: 4 00:35:55.861 Number of Firmware Slots: N/A 00:35:55.861 Firmware Slot 1 Read-Only: N/A 00:35:55.861 Firmware Activation Without Reset: N/A 00:35:55.861 Multiple Update Detection Support: N/A 00:35:55.861 Firmware Update Granularity: No Information Provided 00:35:55.861 Per-Namespace SMART Log: Yes 
00:35:55.861 Asymmetric Namespace Access Log Page: Supported 00:35:55.861 ANA Transition Time : 10 sec 00:35:55.861 00:35:55.861 Asymmetric Namespace Access Capabilities 00:35:55.861 ANA Optimized State : Supported 00:35:55.861 ANA Non-Optimized State : Supported 00:35:55.861 ANA Inaccessible State : Supported 00:35:55.861 ANA Persistent Loss State : Supported 00:35:55.861 ANA Change State : Supported 00:35:55.861 ANAGRPID is not changed : No 00:35:55.861 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:35:55.861 00:35:55.861 ANA Group Identifier Maximum : 128 00:35:55.861 Number of ANA Group Identifiers : 128 00:35:55.861 Max Number of Allowed Namespaces : 1024 00:35:55.861 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:35:55.861 Command Effects Log Page: Supported 00:35:55.861 Get Log Page Extended Data: Supported 00:35:55.861 Telemetry Log Pages: Not Supported 00:35:55.861 Persistent Event Log Pages: Not Supported 00:35:55.861 Supported Log Pages Log Page: May Support 00:35:55.861 Commands Supported & Effects Log Page: Not Supported 00:35:55.861 Feature Identifiers & Effects Log Page:May Support 00:35:55.861 NVMe-MI Commands & Effects Log Page: May Support 00:35:55.861 Data Area 4 for Telemetry Log: Not Supported 00:35:55.861 Error Log Page Entries Supported: 128 00:35:55.861 Keep Alive: Supported 00:35:55.861 Keep Alive Granularity: 1000 ms 00:35:55.861 00:35:55.861 NVM Command Set Attributes 00:35:55.861 ========================== 00:35:55.861 Submission Queue Entry Size 00:35:55.861 Max: 64 00:35:55.861 Min: 64 00:35:55.861 Completion Queue Entry Size 00:35:55.861 Max: 16 00:35:55.861 Min: 16 00:35:55.861 Number of Namespaces: 1024 00:35:55.861 Compare Command: Not Supported 00:35:55.861 Write Uncorrectable Command: Not Supported 00:35:55.861 Dataset Management Command: Supported 00:35:55.861 Write Zeroes Command: Supported 00:35:55.861 Set Features Save Field: Not Supported 00:35:55.861 Reservations: Not Supported 00:35:55.861 Timestamp: Not Supported 
00:35:55.861 Copy: Not Supported 00:35:55.861 Volatile Write Cache: Present 00:35:55.861 Atomic Write Unit (Normal): 1 00:35:55.861 Atomic Write Unit (PFail): 1 00:35:55.861 Atomic Compare & Write Unit: 1 00:35:55.861 Fused Compare & Write: Not Supported 00:35:55.861 Scatter-Gather List 00:35:55.861 SGL Command Set: Supported 00:35:55.861 SGL Keyed: Not Supported 00:35:55.861 SGL Bit Bucket Descriptor: Not Supported 00:35:55.861 SGL Metadata Pointer: Not Supported 00:35:55.861 Oversized SGL: Not Supported 00:35:55.861 SGL Metadata Address: Not Supported 00:35:55.861 SGL Offset: Supported 00:35:55.861 Transport SGL Data Block: Not Supported 00:35:55.861 Replay Protected Memory Block: Not Supported 00:35:55.861 00:35:55.861 Firmware Slot Information 00:35:55.861 ========================= 00:35:55.861 Active slot: 0 00:35:55.861 00:35:55.861 Asymmetric Namespace Access 00:35:55.861 =========================== 00:35:55.861 Change Count : 0 00:35:55.861 Number of ANA Group Descriptors : 1 00:35:55.861 ANA Group Descriptor : 0 00:35:55.861 ANA Group ID : 1 00:35:55.861 Number of NSID Values : 1 00:35:55.861 Change Count : 0 00:35:55.861 ANA State : 1 00:35:55.861 Namespace Identifier : 1 00:35:55.861 00:35:55.861 Commands Supported and Effects 00:35:55.861 ============================== 00:35:55.861 Admin Commands 00:35:55.861 -------------- 00:35:55.861 Get Log Page (02h): Supported 00:35:55.861 Identify (06h): Supported 00:35:55.861 Abort (08h): Supported 00:35:55.861 Set Features (09h): Supported 00:35:55.861 Get Features (0Ah): Supported 00:35:55.861 Asynchronous Event Request (0Ch): Supported 00:35:55.861 Keep Alive (18h): Supported 00:35:55.861 I/O Commands 00:35:55.861 ------------ 00:35:55.861 Flush (00h): Supported 00:35:55.861 Write (01h): Supported LBA-Change 00:35:55.861 Read (02h): Supported 00:35:55.861 Write Zeroes (08h): Supported LBA-Change 00:35:55.861 Dataset Management (09h): Supported 00:35:55.861 00:35:55.861 Error Log 00:35:55.861 ========= 
00:35:55.861 Entry: 0 00:35:55.861 Error Count: 0x3 00:35:55.861 Submission Queue Id: 0x0 00:35:55.861 Command Id: 0x5 00:35:55.861 Phase Bit: 0 00:35:55.861 Status Code: 0x2 00:35:55.861 Status Code Type: 0x0 00:35:55.861 Do Not Retry: 1 00:35:55.861 Error Location: 0x28 00:35:55.862 LBA: 0x0 00:35:55.862 Namespace: 0x0 00:35:55.862 Vendor Log Page: 0x0 00:35:55.862 ----------- 00:35:55.862 Entry: 1 00:35:55.862 Error Count: 0x2 00:35:55.862 Submission Queue Id: 0x0 00:35:55.862 Command Id: 0x5 00:35:55.862 Phase Bit: 0 00:35:55.862 Status Code: 0x2 00:35:55.862 Status Code Type: 0x0 00:35:55.862 Do Not Retry: 1 00:35:55.862 Error Location: 0x28 00:35:55.862 LBA: 0x0 00:35:55.862 Namespace: 0x0 00:35:55.862 Vendor Log Page: 0x0 00:35:55.862 ----------- 00:35:55.862 Entry: 2 00:35:55.862 Error Count: 0x1 00:35:55.862 Submission Queue Id: 0x0 00:35:55.862 Command Id: 0x4 00:35:55.862 Phase Bit: 0 00:35:55.862 Status Code: 0x2 00:35:55.862 Status Code Type: 0x0 00:35:55.862 Do Not Retry: 1 00:35:55.862 Error Location: 0x28 00:35:55.862 LBA: 0x0 00:35:55.862 Namespace: 0x0 00:35:55.862 Vendor Log Page: 0x0 00:35:55.862 00:35:55.862 Number of Queues 00:35:55.862 ================ 00:35:55.862 Number of I/O Submission Queues: 128 00:35:55.862 Number of I/O Completion Queues: 128 00:35:55.862 00:35:55.862 ZNS Specific Controller Data 00:35:55.862 ============================ 00:35:55.862 Zone Append Size Limit: 0 00:35:55.862 00:35:55.862 00:35:55.862 Active Namespaces 00:35:55.862 ================= 00:35:55.862 get_feature(0x05) failed 00:35:55.862 Namespace ID:1 00:35:55.862 Command Set Identifier: NVM (00h) 00:35:55.862 Deallocate: Supported 00:35:55.862 Deallocated/Unwritten Error: Not Supported 00:35:55.862 Deallocated Read Value: Unknown 00:35:55.862 Deallocate in Write Zeroes: Not Supported 00:35:55.862 Deallocated Guard Field: 0xFFFF 00:35:55.862 Flush: Supported 00:35:55.862 Reservation: Not Supported 00:35:55.862 Namespace Sharing Capabilities: Multiple 
Controllers 00:35:55.862 Size (in LBAs): 1953525168 (931GiB) 00:35:55.862 Capacity (in LBAs): 1953525168 (931GiB) 00:35:55.862 Utilization (in LBAs): 1953525168 (931GiB) 00:35:55.862 UUID: cbde6b28-38ea-473f-bbb4-14cf298978ea 00:35:55.862 Thin Provisioning: Not Supported 00:35:55.862 Per-NS Atomic Units: Yes 00:35:55.862 Atomic Boundary Size (Normal): 0 00:35:55.862 Atomic Boundary Size (PFail): 0 00:35:55.862 Atomic Boundary Offset: 0 00:35:55.862 NGUID/EUI64 Never Reused: No 00:35:55.862 ANA group ID: 1 00:35:55.862 Namespace Write Protected: No 00:35:55.862 Number of LBA Formats: 1 00:35:55.862 Current LBA Format: LBA Format #00 00:35:55.862 LBA Format #00: Data Size: 512 Metadata Size: 0 00:35:55.862 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:55.862 rmmod nvme_tcp 00:35:55.862 rmmod nvme_fabrics 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.862 00:16:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:35:58.401 00:16:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:58.401 00:16:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:58.401 00:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:58.401 00:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:58.401 00:16:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:00.937 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:00.937 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:00.938 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:36:01.505 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:01.764 00:36:01.764 real 0m15.767s 00:36:01.764 user 0m4.010s 00:36:01.764 sys 0m8.093s 00:36:01.764 00:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.764 00:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.764 ************************************ 00:36:01.765 END TEST nvmf_identify_kernel_target 00:36:01.765 ************************************ 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:01.765 ************************************ 00:36:01.765 START TEST nvmf_auth_host 00:36:01.765 ************************************ 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:01.765 * Looking for test storage... 
00:36:01.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:36:01.765 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.024 --rc genhtml_branch_coverage=1 00:36:02.024 --rc genhtml_function_coverage=1 00:36:02.024 --rc genhtml_legend=1 00:36:02.024 --rc geninfo_all_blocks=1 00:36:02.024 --rc geninfo_unexecuted_blocks=1 00:36:02.024 00:36:02.024 ' 00:36:02.024 00:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.024 --rc genhtml_branch_coverage=1 00:36:02.024 --rc genhtml_function_coverage=1 00:36:02.024 --rc genhtml_legend=1 00:36:02.024 --rc geninfo_all_blocks=1 00:36:02.024 --rc geninfo_unexecuted_blocks=1 00:36:02.024 00:36:02.024 ' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.024 --rc genhtml_branch_coverage=1 00:36:02.024 --rc genhtml_function_coverage=1 00:36:02.024 --rc genhtml_legend=1 00:36:02.024 --rc geninfo_all_blocks=1 00:36:02.024 --rc geninfo_unexecuted_blocks=1 00:36:02.024 00:36:02.024 ' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:02.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.024 --rc genhtml_branch_coverage=1 00:36:02.024 --rc genhtml_function_coverage=1 00:36:02.024 --rc genhtml_legend=1 00:36:02.024 --rc geninfo_all_blocks=1 00:36:02.024 --rc geninfo_unexecuted_blocks=1 00:36:02.024 00:36:02.024 ' 00:36:02.024 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.025 00:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:02.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:02.025 00:16:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:02.025 00:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:07.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.297 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:07.298 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:07.298 Found net devices under 0000:af:00.0: cvl_0_0 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:07.298 Found net devices under 0000:af:00.1: cvl_0_1 00:36:07.298 00:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:07.298 00:16:46 
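The trace above shows `nvmf_tcp_init` assigning roles to the two discovered `cvl` devices: the first becomes the target side (`cvl_0_0`, later moved into the `cvl_0_0_ns_spdk` namespace at 10.0.0.2) and the second stays in the root namespace as the initiator (`cvl_0_1`, 10.0.0.1). A small sketch of that bookkeeping, with the role-assignment rule and names inferred from this log rather than taken from `nvmf/common.sh` itself:

```python
def plan_tcp_topology(net_devs):
    """Sketch of how the log's nvmf_tcp_init step appears to assign roles:
    first detected NIC -> target (10.0.0.2, inside a netns),
    second NIC -> initiator (10.0.0.1, root namespace).
    Hypothetical helper, not SPDK's actual implementation."""
    if not net_devs:
        raise ValueError("no candidate network devices")
    target = net_devs[0]
    initiator = net_devs[1] if len(net_devs) > 1 else net_devs[0]
    return {
        "target_if": target,
        "initiator_if": initiator,
        "target_ip": "10.0.0.2",
        "initiator_ip": "10.0.0.1",
        # namespace name pattern as seen in the log: <target_if>_ns_spdk
        "target_netns": f"{target}_ns_spdk",
    }
```

With the two devices found in this run, `plan_tcp_topology(["cvl_0_0", "cvl_0_1"])` reproduces the assignments logged above (target `cvl_0_0` in `cvl_0_0_ns_spdk`, initiator `cvl_0_1`).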
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:07.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:07.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:36:07.298 00:36:07.298 --- 10.0.0.2 ping statistics --- 00:36:07.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.298 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:07.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:07.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:36:07.298 00:36:07.298 --- 10.0.0.1 ping statistics --- 00:36:07.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:07.298 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=27437 00:36:07.298 00:16:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 27437 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 27437 ']' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:07.298 00:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=485a143926e31ccf342327cd72c92059 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.0A3 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 485a143926e31ccf342327cd72c92059 0 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 485a143926e31ccf342327cd72c92059 0 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=485a143926e31ccf342327cd72c92059 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.0A3 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.0A3 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0A3 00:36:08.235 00:16:47 
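The `gen_dhchap_key` steps above pipe random hex from `xxd` into an inline `python -` formatting step (`format_dhchap_key` / `format_key`). A plausible Python equivalent of that step, assuming the NVMe-oF DH-HMAC-CHAP secret representation (raw key bytes with a little-endian CRC-32 appended, base64-encoded, wrapped as `DHHC-1:<digest>:<b64>:`); this is a re-creation for illustration, not SPDK's exact inline script:

```python
import base64
import zlib


def format_dhchap_key(key_hex: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Append a little-endian CRC-32 of the raw key bytes, base64-encode
    key+CRC, and wrap in the DHHC-1 secret representation.
    digest: 0=null, 1=sha256, 2=sha384, 3=sha512 (mapping as in the log)."""
    raw = bytes.fromhex(key_hex)
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode()
    return f"{prefix}:{digest:02x}:{b64}:"


# Same arguments as the first key generated in the log (null digest, 32 hex chars).
print(format_dhchap_key("485a143926e31ccf342327cd72c92059", 0))
```

The `chmod 0600` on the resulting `/tmp/spdk.key-*` file matters: SPDK's keyring rejects world-readable key files.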
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=60af114203a836d6b384b29f3a80fb28966ec0988bd6eb92456644ac2b395078 00:36:08.235 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xlv 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 60af114203a836d6b384b29f3a80fb28966ec0988bd6eb92456644ac2b395078 3 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 60af114203a836d6b384b29f3a80fb28966ec0988bd6eb92456644ac2b395078 3 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=60af114203a836d6b384b29f3a80fb28966ec0988bd6eb92456644ac2b395078 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:36:08.236 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xlv 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xlv 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Xlv 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=760bcdc1241b1632b97ba7c7249922866e786c4144ae26c8 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FbK 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 760bcdc1241b1632b97ba7c7249922866e786c4144ae26c8 0 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 760bcdc1241b1632b97ba7c7249922866e786c4144ae26c8 0 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.495 00:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=760bcdc1241b1632b97ba7c7249922866e786c4144ae26c8 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FbK 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FbK 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FbK 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c704e168f0160c9927c3364307909f0cea9d5e6b22dfe510 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.f3Z 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c704e168f0160c9927c3364307909f0cea9d5e6b22dfe510 2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c704e168f0160c9927c3364307909f0cea9d5e6b22dfe510 2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c704e168f0160c9927c3364307909f0cea9d5e6b22dfe510 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.f3Z 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.f3Z 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.f3Z 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4f270e5373b99b476454b174aed95dac 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mag 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4f270e5373b99b476454b174aed95dac 1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4f270e5373b99b476454b174aed95dac 1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4f270e5373b99b476454b174aed95dac 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mag 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mag 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.mag 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=327cbf9dd411b909a821ba8a7c599eb5 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jEo 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 327cbf9dd411b909a821ba8a7c599eb5 1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 327cbf9dd411b909a821ba8a7c599eb5 1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=327cbf9dd411b909a821ba8a7c599eb5 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jEo 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jEo 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.jEo 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:08.495 00:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=78846d8d41789558a8ff12f89eb37537063745f24d975487 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bUq 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 78846d8d41789558a8ff12f89eb37537063745f24d975487 2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 78846d8d41789558a8ff12f89eb37537063745f24d975487 2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=78846d8d41789558a8ff12f89eb37537063745f24d975487 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:08.495 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bUq 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bUq 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bUq 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29c056c83afc5d177ead8a452eb9b5e8 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AXW 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29c056c83afc5d177ead8a452eb9b5e8 0 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 29c056c83afc5d177ead8a452eb9b5e8 0 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=29c056c83afc5d177ead8a452eb9b5e8 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AXW 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AXW 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.AXW 00:36:08.755 00:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b5f8745310ef9bb901a5333b5e5da49ca6fa4bf0ed2e7fd55a4d3fea5325e6dd 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uln 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b5f8745310ef9bb901a5333b5e5da49ca6fa4bf0ed2e7fd55a4d3fea5325e6dd 3 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b5f8745310ef9bb901a5333b5e5da49ca6fa4bf0ed2e7fd55a4d3fea5325e6dd 3 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b5f8745310ef9bb901a5333b5e5da49ca6fa4bf0ed2e7fd55a4d3fea5325e6dd 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uln 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uln 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.uln 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 27437 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 27437 ']' 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
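`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 27437) accepts RPC connections on `/var/tmp/spdk.sock`. A generic sketch of that kind of wait loop over a UNIX domain socket (a hypothetical helper with made-up retry parameters, not SPDK's actual `autotest_common.sh` implementation):

```python
import socket
import time


def waitforlisten(sock_path: str, retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until something is accepting connections on a UNIX domain
    socket, retrying up to `retries` times with `delay` seconds between
    attempts. Returns True on success, False if the socket never came up."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False
```

Polling connect() rather than checking for the socket file avoids the race where the path exists but the daemon is not yet listening.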
00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.755 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0A3 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Xlv ]] 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xlv 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FbK 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.f3Z ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.f3Z 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.mag 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.jEo ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jEo 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.bUq 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.AXW ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.AXW 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.uln 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:09.014 00:16:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:09.014 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:09.015 00:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:11.548 Waiting for block devices as requested 00:36:11.548 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:11.548 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:11.807 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:11.807 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:11.807 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:11.807 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.065 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.065 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.065 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:12.323 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:12.323 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:12.323 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:12.323 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:12.582 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:12.582 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:12.582 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:12.841 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:13.408 No valid GPT data, bailing 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:13.408 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:13.409 00:36:13.409 Discovery Log Number of Records 2, Generation counter 2 00:36:13.409 =====Discovery Log Entry 0====== 00:36:13.409 trtype: tcp 00:36:13.409 adrfam: ipv4 00:36:13.409 subtype: current discovery subsystem 00:36:13.409 treq: not specified, sq flow control disable supported 00:36:13.409 portid: 1 00:36:13.409 trsvcid: 4420 00:36:13.409 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:13.409 traddr: 10.0.0.1 00:36:13.409 eflags: none 00:36:13.409 sectype: none 00:36:13.409 =====Discovery Log Entry 1====== 00:36:13.409 trtype: tcp 00:36:13.409 adrfam: ipv4 00:36:13.409 subtype: nvme subsystem 00:36:13.409 treq: not specified, sq flow control disable supported 00:36:13.409 portid: 1 00:36:13.409 trsvcid: 4420 00:36:13.409 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:13.409 traddr: 10.0.0.1 00:36:13.409 eflags: none 00:36:13.409 sectype: none 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.409 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.668 nvme0n1 00:36:13.668 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.668 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.669 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.927 nvme0n1 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.927 00:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:13.927 
00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:13.927 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.928 00:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.186 nvme0n1 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.186 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:36:14.445 nvme0n1 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.445 nvme0n1 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.445 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:14.704 00:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.704 nvme0n1 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 
00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:14.704 
00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.704 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.963 00:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.963 00:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.963 nvme0n1 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:14.963 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:14.963 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.963 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.222 nvme0n1 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.222 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.222 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.223 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.481 nvme0n1 00:36:15.481 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.481 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.481 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:15.482 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.482 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.741 nvme0n1 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:15.741 00:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.741 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 nvme0n1 00:36:16.000 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.000 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.000 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 00:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.000 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.259 nvme0n1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.259 
00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.259 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.517 nvme0n1 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.517 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:16.776 00:16:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.776 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.035 nvme0n1 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.035 00:16:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:17.035 
00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.035 00:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.035 00:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.035 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:17.036 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:17.036 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:17.036 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:17.036 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.036 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.294 nvme0n1 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.294 00:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.294 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:17.295 
00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.295 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:17.554 nvme0n1
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:17.554 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.121 nvme0n1
00:36:18.121 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.121 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:18.121 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:18.121 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.121 00:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.121 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.122 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.381 nvme0n1
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S:
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z:
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S:
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z:
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.381 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.949 nvme0n1
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==:
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ:
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==:
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ:
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:18.949 00:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.208 nvme0n1
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.208 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.467 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.726 nvme0n1
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.726 00:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.293 nvme0n1
00:36:20.293 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.293 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:20.293 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:20.293 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.293 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:20.552 00:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:21.120 nvme0n1
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.120 00:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.120 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.688 nvme0n1 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.688 00:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.255 nvme0n1 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.255 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:22.514 
00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.514 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.081 nvme0n1 00:36:23.081 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.081 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.081 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.081 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.081 00:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.081 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.082 nvme0n1 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.082 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.341 
00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 nvme0n1 
00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.341 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:23.601 00:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.601 
00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 nvme0n1 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.601 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.859 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.860 nvme0n1 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:23.860 00:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.860 00:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.119 nvme0n1 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.119 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.378 nvme0n1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.378 
00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.378 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.636 nvme0n1 00:36:24.636 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:24.636 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.636 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.636 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.636 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 
00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.637 00:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.637 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.895 nvme0n1 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.895 00:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:24.895 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.896 00:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.154 nvme0n1 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.154 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.155 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.413 nvme0n1 00:36:25.413 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.413 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.413 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.414 00:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.414 00:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:25.414 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.414 00:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.673 nvme0n1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.673 
00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.673 00:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.932 nvme0n1 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:25.932 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.190 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.448 nvme0n1 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.448 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.708 nvme0n1 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.708 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.087 nvme0n1 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.087 00:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.087 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.088 00:17:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.088 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.350 nvme0n1 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.350 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.609 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.868 nvme0n1 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 
00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:27.868 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.869 00:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.436 nvme0n1 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.436 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:28.437 00:17:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.437 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.695 nvme0n1 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.695 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 4 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.954 00:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.213 nvme0n1 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
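The trace above repeats one pattern per (dhgroup, keyid) pair: program the target key (`nvmet_auth_set_key`), restrict the host to a single digest/dhgroup (`bdev_nvme_set_options`), attach a controller with that key, confirm it enumerates as `nvme0`, and detach. A minimal bash sketch of that control flow, with `rpc_cmd` stubbed to echo instead of invoking SPDK's rpc.py (the stub, function names, and fixed key-id range are illustrative assumptions, not the actual host/auth.sh):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the auth test loop visible in the log.
# rpc_cmd is a stub; the real suite calls scripts/rpc.py against a live target.
rpc_cmd() { echo "rpc_cmd $*"; }

# Stub for the target-side key programming step (hypothetical signature).
nvmet_auth_set_key() { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }

run_auth_matrix() {
    local digest=sha384
    local dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
    local dhgroup keyid
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in 0 1 2 3 4; do
            # Program the target side with the key under test
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Restrict the host to exactly one digest/dhgroup combination
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # Attach with that key, then tear the controller down again
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 --dhchap-key "key${keyid}"
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
}

run_auth_matrix
```

In the real log each attach is additionally paired with `--dhchap-ctrlr-key ckeyN` whenever a controller key exists for that key id (the `${ckeys[keyid]:+...}` expansion on the `host/auth.sh@58` lines).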
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 
00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.213 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 nvme0n1 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.780 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.039 00:17:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.039 00:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.607 nvme0n1 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.607 00:17:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.607 00:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.174 nvme0n1 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:31.174 00:17:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.174 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.741 nvme0n1 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.741 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.000 00:17:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.000 00:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.567 nvme0n1 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.567 
00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.567 
00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.567 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 nvme0n1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.826 00:17:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 
00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 nvme0n1 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.826 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.085 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.085 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.085 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.085 00:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.085 nvme0n1 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.085 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.086 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.086 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.086 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.086 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.086 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.345 00:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 nvme0n1 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.345 00:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:33.345 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.346 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.605 nvme0n1 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.605 
00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.605 00:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.605 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.864 nvme0n1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:33.865 00:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.865 00:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.124 nvme0n1 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:34.124 
00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.124 00:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.124 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.383 nvme0n1 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.383 00:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.383 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.642 nvme0n1 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.642 00:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.642 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.643 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.902 nvme0n1 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.902 
00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8: 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.902 00:17:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.902 00:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 nvme0n1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:35.162 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.162 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.162 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 nvme0n1 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.421 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.680 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:35.680 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.680 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.939 nvme0n1 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.939 00:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:35.939 00:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.198 nvme0n1
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.198 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.456 nvme0n1
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.456 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.457 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.024 nvme0n1
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==:
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]]
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==:
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.024 00:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.024 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.283 nvme0n1
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.283 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S:
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z:
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S:
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z:
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.542 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.801 nvme0n1
00:36:37.801 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.801 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==:
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ:
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==:
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ:
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:37.802 00:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.369 nvme0n1
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=:
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.369 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.628 nvme0n1
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.628 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:38.887 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDg1YTE0MzkyNmUzMWNjZjM0MjMyN2NkNzJjOTIwNTmemql8:
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=: ]]
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjBhZjExNDIwM2E4MzZkNmIzODRiMjlmM2E4MGZiMjg5NjZlYzA5ODhiZDZlYjkyNDU2NjQ0YWMyYjM5NTA3OOSoAhI=:
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:38.888 00:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:39.458 nvme0n1
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:39.458 00:17:18
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:39.458 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.459 00:17:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.459 00:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.027 nvme0n1 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.027 00:17:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.027 00:17:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.027 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.595 nvme0n1 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:40.595 00:17:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:40.595 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Nzg4NDZkOGQ0MTc4OTU1OGE4ZmYxMmY4OWViMzc1MzcwNjM3NDVmMjRkOTc1NDg3dE/pLw==: 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: ]] 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjljMDU2YzgzYWZjNWQxNzdlYWQ4YTQ1MmViOWI1ZTiWULSZ: 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.854 00:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:36:41.421 nvme0n1 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:41.421 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjVmODc0NTMxMGVmOWJiOTAxYTUzMzNiNWU1ZGE0OWNhNmZhNGJmMGVkMmU3ZmQ1NWE0ZDNmZWE1MzI1ZTZkZMBuIkI=: 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:41.422 
00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.422 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.036 nvme0n1 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.036 00:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:42.036 
00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.036 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.037 request: 00:36:42.037 { 00:36:42.037 "name": "nvme0", 00:36:42.037 "trtype": "tcp", 00:36:42.037 "traddr": "10.0.0.1", 00:36:42.037 "adrfam": "ipv4", 00:36:42.037 "trsvcid": "4420", 00:36:42.037 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:42.037 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:42.037 "prchk_reftag": false, 00:36:42.037 "prchk_guard": false, 00:36:42.037 "hdgst": false, 00:36:42.037 "ddgst": false, 00:36:42.037 "allow_unrecognized_csi": false, 00:36:42.037 "method": "bdev_nvme_attach_controller", 00:36:42.037 "req_id": 1 00:36:42.037 } 00:36:42.037 Got JSON-RPC error response 00:36:42.037 response: 00:36:42.037 { 00:36:42.037 "code": -5, 00:36:42.037 "message": "Input/output 
error" 00:36:42.037 } 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.037 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.296 request: 00:36:42.296 { 00:36:42.296 "name": "nvme0", 00:36:42.296 "trtype": "tcp", 00:36:42.296 "traddr": "10.0.0.1", 
00:36:42.296 "adrfam": "ipv4", 00:36:42.296 "trsvcid": "4420", 00:36:42.296 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:42.296 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:42.296 "prchk_reftag": false, 00:36:42.296 "prchk_guard": false, 00:36:42.296 "hdgst": false, 00:36:42.296 "ddgst": false, 00:36:42.296 "dhchap_key": "key2", 00:36:42.296 "allow_unrecognized_csi": false, 00:36:42.296 "method": "bdev_nvme_attach_controller", 00:36:42.296 "req_id": 1 00:36:42.296 } 00:36:42.296 Got JSON-RPC error response 00:36:42.296 response: 00:36:42.296 { 00:36:42.296 "code": -5, 00:36:42.296 "message": "Input/output error" 00:36:42.296 } 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.296 00:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:42.296 00:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.296 request: 00:36:42.296 { 00:36:42.296 "name": "nvme0", 00:36:42.296 "trtype": "tcp", 00:36:42.296 "traddr": "10.0.0.1", 00:36:42.296 "adrfam": "ipv4", 00:36:42.296 "trsvcid": "4420", 00:36:42.296 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:36:42.296 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:36:42.296 "prchk_reftag": false, 00:36:42.296 "prchk_guard": false, 00:36:42.296 "hdgst": false, 00:36:42.296 "ddgst": false, 00:36:42.296 "dhchap_key": "key1", 00:36:42.296 "dhchap_ctrlr_key": "ckey2", 00:36:42.296 "allow_unrecognized_csi": false, 00:36:42.296 "method": "bdev_nvme_attach_controller", 00:36:42.296 "req_id": 1 00:36:42.296 } 00:36:42.296 Got JSON-RPC error response 00:36:42.296 response: 00:36:42.296 { 00:36:42.296 "code": -5, 00:36:42.296 "message": "Input/output error" 00:36:42.296 } 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.296 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.555 nvme0n1 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:42.555 00:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.555 00:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.555 request: 00:36:42.555 { 00:36:42.555 "name": "nvme0", 00:36:42.555 "dhchap_key": "key1", 00:36:42.555 "dhchap_ctrlr_key": "ckey2", 00:36:42.555 "method": "bdev_nvme_set_keys", 00:36:42.555 "req_id": 1 00:36:42.555 } 00:36:42.555 Got JSON-RPC error response 00:36:42.555 response: 00:36:42.555 { 00:36:42.555 "code": -13, 00:36:42.555 "message": "Permission denied" 00:36:42.555 } 00:36:42.555 
00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.555 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:42.556 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.815 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.815 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:42.815 00:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:36:43.751 00:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzYwYmNkYzEyNDFiMTYzMmI5N2JhN2M3MjQ5OTIyODY2ZTc4NmM0MTQ0YWUyNmM4CTa6Pw==: 00:36:44.688 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: ]] 00:36:44.688 00:17:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzcwNGUxNjhmMDE2MGM5OTI3YzMzNjQzMDc5MDlmMGNlYTlkNWU2YjIyZGZlNTEwEYJuGQ==: 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.947 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.948 nvme0n1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.948 00:17:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyNzBlNTM3M2I5OWI0NzY0NTRiMTc0YWVkOTVkYWN0mx2S: 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: ]] 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzI3Y2JmOWRkNDExYjkwOWE4MjFiYThhN2M1OTllYjURdW4z: 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:44.948 
00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.948 00:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.948 request: 00:36:44.948 { 00:36:44.948 "name": "nvme0", 00:36:44.948 "dhchap_key": "key2", 00:36:44.948 "dhchap_ctrlr_key": "ckey1", 00:36:44.948 "method": "bdev_nvme_set_keys", 00:36:44.948 "req_id": 1 00:36:44.948 } 00:36:44.948 Got JSON-RPC error response 00:36:44.948 response: 00:36:44.948 { 00:36:44.948 "code": -13, 00:36:44.948 "message": "Permission denied" 00:36:44.948 } 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.948 00:17:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.948 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.207 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:36:45.207 00:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.143 rmmod nvme_tcp 00:36:46.143 rmmod nvme_fabrics 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 27437 ']' 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 27437 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 27437 ']' 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 27437 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 27437 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 27437' 00:36:46.143 killing process with pid 27437 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 27437 00:36:46.143 00:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 27437 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:36:47.079 
00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:47.079 00:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 
00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:49.614 00:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:52.145 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:52.145 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:52.713 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:52.971 00:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0A3 /tmp/spdk.key-null.FbK /tmp/spdk.key-sha256.mag /tmp/spdk.key-sha384.bUq /tmp/spdk.key-sha512.uln /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:36:52.971 00:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:55.506 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:55.506 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:55.506 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:55.506 00:36:55.506 real 0m53.735s 00:36:55.506 user 0m48.884s 00:36:55.506 sys 0m11.859s 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.506 ************************************ 00:36:55.506 END TEST nvmf_auth_host 00:36:55.506 ************************************ 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.506 ************************************ 00:36:55.506 START TEST nvmf_digest 00:36:55.506 ************************************ 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:36:55.506 * Looking for test storage... 00:36:55.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:36:55.506 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@338 -- # local 'op=<' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 
00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:55.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.766 --rc genhtml_branch_coverage=1 00:36:55.766 --rc genhtml_function_coverage=1 00:36:55.766 --rc genhtml_legend=1 00:36:55.766 --rc geninfo_all_blocks=1 00:36:55.766 --rc geninfo_unexecuted_blocks=1 00:36:55.766 00:36:55.766 ' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:55.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.766 --rc genhtml_branch_coverage=1 00:36:55.766 --rc genhtml_function_coverage=1 00:36:55.766 --rc genhtml_legend=1 00:36:55.766 --rc geninfo_all_blocks=1 00:36:55.766 --rc geninfo_unexecuted_blocks=1 00:36:55.766 00:36:55.766 ' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:55.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.766 --rc genhtml_branch_coverage=1 00:36:55.766 --rc genhtml_function_coverage=1 00:36:55.766 --rc genhtml_legend=1 00:36:55.766 --rc geninfo_all_blocks=1 00:36:55.766 --rc geninfo_unexecuted_blocks=1 00:36:55.766 00:36:55.766 ' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:55.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:55.766 --rc genhtml_branch_coverage=1 00:36:55.766 --rc genhtml_function_coverage=1 00:36:55.766 --rc genhtml_legend=1 00:36:55.766 --rc geninfo_all_blocks=1 00:36:55.766 --rc geninfo_unexecuted_blocks=1 00:36:55.766 00:36:55.766 ' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:55.766 
00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:55.766 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:55.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:55.767 00:17:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:36:55.767 00:17:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:01.041 00:17:39 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:01.041 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:01.041 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:01.041 Found net devices under 0000:af:00.0: cvl_0_0 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:01.041 Found net devices under 0000:af:00.1: cvl_0_1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:01.041 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:01.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:01.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:37:01.042 00:37:01.042 --- 10.0.0.2 ping statistics --- 00:37:01.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.042 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:01.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:01.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:37:01.042 00:37:01.042 --- 10.0.0.1 ping statistics --- 00:37:01.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:01.042 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:01.042 ************************************ 00:37:01.042 START TEST nvmf_digest_clean 00:37:01.042 ************************************ 00:37:01.042 
00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=41072 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 41072 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 41072 ']' 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:01.042 00:17:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:01.042 00:17:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.042 [2024-12-14 00:17:39.907508] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:01.042 [2024-12-14 00:17:39.907602] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.042 [2024-12-14 00:17:40.029561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.042 [2024-12-14 00:17:40.142673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:01.042 [2024-12-14 00:17:40.142720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:01.042 [2024-12-14 00:17:40.142730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:01.042 [2024-12-14 00:17:40.142741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:01.042 [2024-12-14 00:17:40.142749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:01.042 [2024-12-14 00:17:40.144004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.610 00:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.178 null0 00:37:02.178 [2024-12-14 00:17:41.053692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.178 [2024-12-14 00:17:41.077936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=41319 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 41319 /var/tmp/bperf.sock 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 41319 ']' 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:02.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.178 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:02.178 [2024-12-14 00:17:41.143460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:02.178 [2024-12-14 00:17:41.143547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41319 ] 00:37:02.178 [2024-12-14 00:17:41.256312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.437 [2024-12-14 00:17:41.366683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:03.005 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.005 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:03.005 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:03.005 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:03.005 00:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:03.572 00:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:03.572 00:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 
4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:03.831 nvme0n1 00:37:03.831 00:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:03.831 00:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:03.831 Running I/O for 2 seconds... 00:37:06.144 22234.00 IOPS, 86.85 MiB/s [2024-12-13T23:17:45.285Z] 21554.50 IOPS, 84.20 MiB/s 00:37:06.144 Latency(us) 00:37:06.144 [2024-12-13T23:17:45.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.144 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:06.144 nvme0n1 : 2.04 21154.65 82.64 0.00 0.00 5931.11 2995.93 45937.62 00:37:06.144 [2024-12-13T23:17:45.285Z] =================================================================================================================== 00:37:06.144 [2024-12-13T23:17:45.285Z] Total : 21154.65 82.64 0.00 0.00 5931.11 2995.93 45937.62 00:37:06.144 { 00:37:06.144 "results": [ 00:37:06.144 { 00:37:06.144 "job": "nvme0n1", 00:37:06.144 "core_mask": "0x2", 00:37:06.144 "workload": "randread", 00:37:06.144 "status": "finished", 00:37:06.144 "queue_depth": 128, 00:37:06.144 "io_size": 4096, 00:37:06.144 "runtime": 2.043853, 00:37:06.144 "iops": 21154.652511702163, 00:37:06.144 "mibps": 82.63536137383657, 00:37:06.144 "io_failed": 0, 00:37:06.144 "io_timeout": 0, 00:37:06.144 "avg_latency_us": 5931.10717262662, 00:37:06.144 "min_latency_us": 2995.9314285714286, 00:37:06.144 "max_latency_us": 45937.61523809524 00:37:06.144 } 00:37:06.144 ], 00:37:06.144 "core_count": 1 00:37:06.144 } 00:37:06.145 00:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:06.145 00:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:37:06.145 00:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:06.145 00:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:06.145 | select(.opcode=="crc32c") 00:37:06.145 | "\(.module_name) \(.executed)"' 00:37:06.145 00:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 41319 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 41319 ']' 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 41319 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41319 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41319' 00:37:06.145 killing process with pid 41319 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 41319 00:37:06.145 Received shutdown signal, test time was about 2.000000 seconds 00:37:06.145 00:37:06.145 Latency(us) 00:37:06.145 [2024-12-13T23:17:45.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.145 [2024-12-13T23:17:45.286Z] =================================================================================================================== 00:37:06.145 [2024-12-13T23:17:45.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:06.145 00:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 41319 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=42003 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 42003 /var/tmp/bperf.sock 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 42003 ']' 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:07.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:07.080 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:07.080 [2024-12-14 00:17:46.113408] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:07.080 [2024-12-14 00:17:46.113505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42003 ] 00:37:07.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:07.080 Zero copy mechanism will not be used. 
00:37:07.339 [2024-12-14 00:17:46.226693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.339 [2024-12-14 00:17:46.336729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.906 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.906 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:07.906 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:07.906 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:07.906 00:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:08.474 00:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:08.474 00:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:08.733 nvme0n1 00:37:08.733 00:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:08.733 00:17:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:08.733 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:08.733 Zero copy mechanism will not be used. 00:37:08.733 Running I/O for 2 seconds... 
00:37:11.046 5011.00 IOPS, 626.38 MiB/s [2024-12-13T23:17:50.187Z] 4857.00 IOPS, 607.12 MiB/s 00:37:11.046 Latency(us) 00:37:11.046 [2024-12-13T23:17:50.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.046 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:11.046 nvme0n1 : 2.00 4860.98 607.62 0.00 0.00 3288.57 752.88 10985.08 00:37:11.046 [2024-12-13T23:17:50.187Z] =================================================================================================================== 00:37:11.046 [2024-12-13T23:17:50.187Z] Total : 4860.98 607.62 0.00 0.00 3288.57 752.88 10985.08 00:37:11.046 { 00:37:11.046 "results": [ 00:37:11.046 { 00:37:11.046 "job": "nvme0n1", 00:37:11.046 "core_mask": "0x2", 00:37:11.046 "workload": "randread", 00:37:11.046 "status": "finished", 00:37:11.046 "queue_depth": 16, 00:37:11.046 "io_size": 131072, 00:37:11.046 "runtime": 2.001653, 00:37:11.046 "iops": 4860.982398048013, 00:37:11.046 "mibps": 607.6227997560017, 00:37:11.046 "io_failed": 0, 00:37:11.046 "io_timeout": 0, 00:37:11.046 "avg_latency_us": 3288.567204815739, 00:37:11.046 "min_latency_us": 752.8838095238095, 00:37:11.046 "max_latency_us": 10985.081904761904 00:37:11.046 } 00:37:11.046 ], 00:37:11.046 "core_count": 1 00:37:11.046 } 00:37:11.046 00:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:11.046 00:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:11.046 00:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:11.046 00:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:11.046 | select(.opcode=="crc32c") 00:37:11.046 | "\(.module_name) \(.executed)"' 00:37:11.046 00:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 42003 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 42003 ']' 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 42003 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 42003 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 42003' 00:37:11.046 killing process with pid 42003 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 42003 00:37:11.046 Received shutdown signal, test time was about 2.000000 seconds 00:37:11.046 
00:37:11.046 Latency(us) 00:37:11.046 [2024-12-13T23:17:50.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.046 [2024-12-13T23:17:50.187Z] =================================================================================================================== 00:37:11.046 [2024-12-13T23:17:50.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:11.046 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 42003 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:11.981 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=42879 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 42879 /var/tmp/bperf.sock 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 42879 ']' 00:37:11.982 00:17:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:11.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.982 00:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:11.982 [2024-12-14 00:17:51.031551] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:11.982 [2024-12-14 00:17:51.031640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42879 ] 00:37:12.240 [2024-12-14 00:17:51.144895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.240 [2024-12-14 00:17:51.248667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.807 00:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.807 00:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:12.807 00:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:12.807 00:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:12.807 00:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:13.374 00:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.374 00:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:13.633 nvme0n1 00:37:13.633 00:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:13.633 00:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.892 Running I/O for 2 seconds... 
00:37:15.763 23629.00 IOPS, 92.30 MiB/s [2024-12-13T23:17:54.904Z] 23674.50 IOPS, 92.48 MiB/s 00:37:15.763 Latency(us) 00:37:15.763 [2024-12-13T23:17:54.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.763 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:15.763 nvme0n1 : 2.01 23676.16 92.48 0.00 0.00 5396.53 2340.57 7645.87 00:37:15.763 [2024-12-13T23:17:54.904Z] =================================================================================================================== 00:37:15.763 [2024-12-13T23:17:54.904Z] Total : 23676.16 92.48 0.00 0.00 5396.53 2340.57 7645.87 00:37:15.763 { 00:37:15.763 "results": [ 00:37:15.763 { 00:37:15.763 "job": "nvme0n1", 00:37:15.763 "core_mask": "0x2", 00:37:15.763 "workload": "randwrite", 00:37:15.763 "status": "finished", 00:37:15.763 "queue_depth": 128, 00:37:15.763 "io_size": 4096, 00:37:15.763 "runtime": 2.006618, 00:37:15.763 "iops": 23676.15560111591, 00:37:15.763 "mibps": 92.48498281685902, 00:37:15.763 "io_failed": 0, 00:37:15.763 "io_timeout": 0, 00:37:15.763 "avg_latency_us": 5396.534449031712, 00:37:15.763 "min_latency_us": 2340.5714285714284, 00:37:15.763 "max_latency_us": 7645.866666666667 00:37:15.763 } 00:37:15.763 ], 00:37:15.763 "core_count": 1 00:37:15.763 } 00:37:15.763 00:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:15.763 00:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:15.763 00:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:15.763 00:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:15.763 | select(.opcode=="crc32c") 00:37:15.763 | "\(.module_name) \(.executed)"' 00:37:15.763 00:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 42879 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 42879 ']' 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 42879 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 42879 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 42879' 00:37:16.027 killing process with pid 42879 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 42879 00:37:16.027 Received shutdown signal, test time was about 2.000000 seconds 00:37:16.027 
00:37:16.027 Latency(us) 00:37:16.027 [2024-12-13T23:17:55.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.027 [2024-12-13T23:17:55.168Z] =================================================================================================================== 00:37:16.027 [2024-12-13T23:17:55.168Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:16.027 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 42879 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=43586 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 43586 /var/tmp/bperf.sock 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 43586 ']' 00:37:16.968 00:17:55 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.968 00:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.968 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.968 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.968 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.968 [2024-12-14 00:17:56.070209] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:16.968 [2024-12-14 00:17:56.070301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43586 ] 00:37:16.968 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:16.968 Zero copy mechanism will not be used. 
00:37:17.227 [2024-12-14 00:17:56.182179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.227 [2024-12-14 00:17:56.293144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.793 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.793 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:17.793 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:17.793 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:17.793 00:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:18.360 00:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.360 00:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:18.927 nvme0n1 00:37:18.927 00:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:18.927 00:17:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:18.927 Zero copy mechanism will not be used. 00:37:18.927 Running I/O for 2 seconds... 
00:37:20.800 5441.00 IOPS, 680.12 MiB/s [2024-12-13T23:17:59.941Z] 5693.00 IOPS, 711.62 MiB/s 00:37:20.800 Latency(us) 00:37:20.800 [2024-12-13T23:17:59.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.800 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:20.800 nvme0n1 : 2.00 5691.56 711.45 0.00 0.00 2806.15 2090.91 12483.05 00:37:20.800 [2024-12-13T23:17:59.941Z] =================================================================================================================== 00:37:20.800 [2024-12-13T23:17:59.941Z] Total : 5691.56 711.45 0.00 0.00 2806.15 2090.91 12483.05 00:37:20.800 { 00:37:20.800 "results": [ 00:37:20.800 { 00:37:20.800 "job": "nvme0n1", 00:37:20.800 "core_mask": "0x2", 00:37:20.800 "workload": "randwrite", 00:37:20.800 "status": "finished", 00:37:20.800 "queue_depth": 16, 00:37:20.800 "io_size": 131072, 00:37:20.800 "runtime": 2.003317, 00:37:20.800 "iops": 5691.560546833078, 00:37:20.800 "mibps": 711.4450683541347, 00:37:20.800 "io_failed": 0, 00:37:20.800 "io_timeout": 0, 00:37:20.800 "avg_latency_us": 2806.1464344601195, 00:37:20.800 "min_latency_us": 2090.9104761904764, 00:37:20.800 "max_latency_us": 12483.047619047618 00:37:20.800 } 00:37:20.800 ], 00:37:20.800 "core_count": 1 00:37:20.800 } 00:37:20.800 00:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:20.800 00:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:20.800 00:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:20.800 00:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:20.800 | select(.opcode=="crc32c") 00:37:20.800 | "\(.module_name) \(.executed)"' 00:37:20.800 00:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 43586 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 43586 ']' 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 43586 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 43586 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 43586' 00:37:21.059 killing process with pid 43586 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 43586 00:37:21.059 Received shutdown signal, test time was about 2.000000 seconds 00:37:21.059 
00:37:21.059 Latency(us) 00:37:21.059 [2024-12-13T23:18:00.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.059 [2024-12-13T23:18:00.200Z] =================================================================================================================== 00:37:21.059 [2024-12-13T23:18:00.200Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:21.059 00:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 43586 00:37:21.994 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 41072 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 41072 ']' 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 41072 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 41072 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 41072' 00:37:21.995 killing process with pid 41072 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 41072 00:37:21.995 00:18:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 41072 00:37:23.371 00:37:23.371 real 0m22.421s 00:37:23.371 
user 0m42.200s 00:37:23.371 sys 0m4.811s 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.371 ************************************ 00:37:23.371 END TEST nvmf_digest_clean 00:37:23.371 ************************************ 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.371 ************************************ 00:37:23.371 START TEST nvmf_digest_error 00:37:23.371 ************************************ 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=44789 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 44789 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 44789 ']' 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.371 00:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:23.371 [2024-12-14 00:18:02.412993] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:23.371 [2024-12-14 00:18:02.413084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.630 [2024-12-14 00:18:02.529737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.630 [2024-12-14 00:18:02.634254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.630 [2024-12-14 00:18:02.634296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:23.630 [2024-12-14 00:18:02.634307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.630 [2024-12-14 00:18:02.634317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.630 [2024-12-14 00:18:02.634324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:23.630 [2024-12-14 00:18:02.635685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.198 [2024-12-14 00:18:03.249760] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.198 00:18:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.198 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.457 null0 00:37:24.457 [2024-12-14 00:18:03.589972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.715 [2024-12-14 00:18:03.614182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=44964 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 44964 /var/tmp/bperf.sock 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 44964 ']' 
00:37:24.715 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:24.716 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.716 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:24.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:24.716 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.716 00:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:24.716 [2024-12-14 00:18:03.694258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:24.716 [2024-12-14 00:18:03.694346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44964 ] 00:37:24.716 [2024-12-14 00:18:03.807655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.974 [2024-12-14 00:18:03.919254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.541 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.541 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:25.541 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:25.541 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:25.800 00:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:26.058 nvme0n1 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:26.058 00:18:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:26.058 Running I/O for 2 seconds... 00:37:26.058 [2024-12-14 00:18:05.136318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.058 [2024-12-14 00:18:05.136371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.058 [2024-12-14 00:18:05.136388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.058 [2024-12-14 00:18:05.145709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.058 [2024-12-14 00:18:05.145744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.058 [2024-12-14 00:18:05.145758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.058 [2024-12-14 00:18:05.159306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.058 [2024-12-14 00:18:05.159337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.058 [2024-12-14 00:18:05.159350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.058 [2024-12-14 00:18:05.170904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.059 [2024-12-14 00:18:05.170934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:5793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.059 [2024-12-14 00:18:05.170946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.059 [2024-12-14 00:18:05.180417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.059 [2024-12-14 00:18:05.180452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.059 [2024-12-14 00:18:05.180465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.059 [2024-12-14 00:18:05.194425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.059 [2024-12-14 00:18:05.194462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.059 [2024-12-14 00:18:05.194475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.209167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.209197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.209210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.218461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.218490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.218502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.232393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.232421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.232446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.245245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.245274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.245286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.255010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.255039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.255064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.267294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.267327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.267339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.280146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.280176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.280189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.290285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.290314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.290326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.301659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.301687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.301699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 
00:18:05.314417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.314451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.314463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.325043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.325071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.325083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.317 [2024-12-14 00:18:05.337479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.317 [2024-12-14 00:18:05.337512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.317 [2024-12-14 00:18:05.337524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.347284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.347312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.347324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.359435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.359469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.359482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.370346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.370373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.370386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.380793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.380821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.380834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.390682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.390709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.390722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.401818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.401847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.401859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.411482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.411510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.411523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.425499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.425528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.425544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.434656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.434684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5832 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.434698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.318 [2024-12-14 00:18:05.447916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.318 [2024-12-14 00:18:05.447945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.318 [2024-12-14 00:18:05.447957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.461042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.461072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.461085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.473812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.473840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.473852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.486454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.486482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.486494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.500244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.500272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.500285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.509678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.509706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.509718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.523760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.523788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.523800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.538603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.538637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.538649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.577 [2024-12-14 00:18:05.548808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.577 [2024-12-14 00:18:05.548836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.577 [2024-12-14 00:18:05.548848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.560529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.560556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.560569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.569299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.569326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.569338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.581004] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.581031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.592951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.592978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.592990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.602674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.602703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.602715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.617657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.617684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.617696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.632286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.632314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.632326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.643699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.643728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.643740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.653460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.653492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.653504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.666107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.666149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.678699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.678727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.678740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.689669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.689709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.689722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.699054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.699082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.699094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.578 [2024-12-14 00:18:05.713111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.578 [2024-12-14 00:18:05.713140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16235 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:26.578 [2024-12-14 00:18:05.713153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.722999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.723028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.723040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.736958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.737002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.737015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.751624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.751654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.751666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.761317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.761354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.774312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.774341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.774353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.783724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.783751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.783763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.797479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.797507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.797520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.811596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.811624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.811636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.821841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.821870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.821882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.837 [2024-12-14 00:18:05.834413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.837 [2024-12-14 00:18:05.834447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.837 [2024-12-14 00:18:05.834460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.846958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.846986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.846998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.856906] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.856933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.856945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.869258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.869286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.869299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.880164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.880192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.880204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.890277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.890305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.890318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.902391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.902419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.902431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.912629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.912657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.912670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.925420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.925456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.925469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.936789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.936823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.936837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.947975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.948004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.948017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.958624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.958652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.958665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:26.838 [2024-12-14 00:18:05.970768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:26.838 [2024-12-14 00:18:05.970795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:26.838 [2024-12-14 00:18:05.970807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.097 [2024-12-14 00:18:05.984620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.097 [2024-12-14 00:18:05.984648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9547 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:27.097 [2024-12-14 00:18:05.984661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.097 [2024-12-14 00:18:05.994272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.097 [2024-12-14 00:18:05.994301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.097 [2024-12-14 00:18:05.994315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.097 [2024-12-14 00:18:06.007983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.097 [2024-12-14 00:18:06.008012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.097 [2024-12-14 00:18:06.008024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.097 [2024-12-14 00:18:06.017470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.097 [2024-12-14 00:18:06.017498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.097 [2024-12-14 00:18:06.017510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.097 [2024-12-14 00:18:06.028898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.097 [2024-12-14 00:18:06.028926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.028937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.039606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.039634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.039646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.050382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.050411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.050423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.061400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.061428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.061446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.072706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.072734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.072747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.082936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.082963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.082975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.094210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.094238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.094251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.105262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.105290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.105304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.114739] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.114768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.114780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 21711.00 IOPS, 84.81 MiB/s [2024-12-13T23:18:06.239Z] [2024-12-14 00:18:06.128062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.128095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.128107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.140312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.140339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.140351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.153974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.154002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.164381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.164409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.164420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.175167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.175195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.175208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.186082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.186108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.186120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.197030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.197057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:27.098 [2024-12-14 00:18:06.197070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.207460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.207487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.207499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.219562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.219590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.219602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.098 [2024-12-14 00:18:06.230757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.098 [2024-12-14 00:18:06.230784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.098 [2024-12-14 00:18:06.230796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.241636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.241664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:12443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.241677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.254776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.254805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.254818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.264050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.264077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.264089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.277466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.277494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.277505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.290067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 
[2024-12-14 00:18:06.290097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.290109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.300022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.300050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.300062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.312496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.357 [2024-12-14 00:18:06.312524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.357 [2024-12-14 00:18:06.312536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.357 [2024-12-14 00:18:06.323480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.323512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.323524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.333287] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.333316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.333328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.346442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.346471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.346482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.356849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.356877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.356890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.369349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.369378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.369391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.379384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.379412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.379425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.391359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.391387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.391399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.405491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.405519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.405532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.415318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.415347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.415360] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.427344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.427373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.427385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.440175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.440204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.440216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.451356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.451385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.451397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.461990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.462018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13013 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.462030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.473090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.473119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.473131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.482701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.482735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.482748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.358 [2024-12-14 00:18:06.496266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.358 [2024-12-14 00:18:06.496296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.358 [2024-12-14 00:18:06.496308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.617 [2024-12-14 00:18:06.507991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.617 [2024-12-14 00:18:06.508020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.617 [2024-12-14 00:18:06.508033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.617 [2024-12-14 00:18:06.518905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.617 [2024-12-14 00:18:06.518939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.617 [2024-12-14 00:18:06.518951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.617 [2024-12-14 00:18:06.528770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.617 [2024-12-14 00:18:06.528799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.617 [2024-12-14 00:18:06.528811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.617 [2024-12-14 00:18:06.541390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.617 [2024-12-14 00:18:06.541420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.617 [2024-12-14 00:18:06.541433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.551449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.551478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.551491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.565979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.566010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.566024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.581564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.581595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.581608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.595887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.595918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.595932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.606869] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.606900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.606914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.620779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.620810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.620823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.633094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.633125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.633139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.646098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.646129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.646144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.659113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.659144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.659157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.673310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.673341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.673354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.684617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.684646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.684658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.694273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.694302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.694314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.707141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.707170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.707182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.717572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.717600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.717611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.732306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.732341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.732353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.742051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.742080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9934 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.618 [2024-12-14 00:18:06.756049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.618 [2024-12-14 00:18:06.756079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.618 [2024-12-14 00:18:06.756104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.771569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.771602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.784662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.784693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.784706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.794356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.794384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.794396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.807312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.807340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.807351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.816716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.816743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.816755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.830586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.830615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.830627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.845483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.845512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.845524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.856844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.856873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.856886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.867243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.867271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.867284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.878384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.878412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.878424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.888733] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.888773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.899898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.899927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.899938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.910417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.910451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.910464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.921597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.921625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.933583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.933611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.933629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.942959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.942987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.943000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.954483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.954511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.954523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.964999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.965027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.965039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.974235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.974263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.974276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:06.987508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:06.987534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:06.987546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:07.001835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:07.001864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:07.001876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:27.878 [2024-12-14 00:18:07.014785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:27.878 [2024-12-14 00:18:07.014813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14578 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:27.878 [2024-12-14 00:18:07.014825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.137 [2024-12-14 00:18:07.024643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.137 [2024-12-14 00:18:07.024673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.137 [2024-12-14 00:18:07.024685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.137 [2024-12-14 00:18:07.039926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.137 [2024-12-14 00:18:07.039954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.137 [2024-12-14 00:18:07.039966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.137 [2024-12-14 00:18:07.054636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.137 [2024-12-14 00:18:07.054664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.137 [2024-12-14 00:18:07.054677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 [2024-12-14 00:18:07.067703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.138 [2024-12-14 00:18:07.067731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.138 [2024-12-14 00:18:07.067743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 [2024-12-14 00:18:07.077735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.138 [2024-12-14 00:18:07.077762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.138 [2024-12-14 00:18:07.077775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 [2024-12-14 00:18:07.092299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.138 [2024-12-14 00:18:07.092327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.138 [2024-12-14 00:18:07.092339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 [2024-12-14 00:18:07.106271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:28.138 [2024-12-14 00:18:07.106299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.138 [2024-12-14 00:18:07.106312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 [2024-12-14 00:18:07.116243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:28.138 [2024-12-14 00:18:07.116271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.138 [2024-12-14 00:18:07.116283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:28.138 21561.00 IOPS, 84.22 MiB/s 00:37:28.138 Latency(us) 00:37:28.138 [2024-12-13T23:18:07.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.138 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:28.138 nvme0n1 : 2.00 21578.58 84.29 0.00 0.00 5926.52 3042.74 21346.01 00:37:28.138 [2024-12-13T23:18:07.279Z] =================================================================================================================== 00:37:28.138 [2024-12-13T23:18:07.279Z] Total : 21578.58 84.29 0.00 0.00 5926.52 3042.74 21346.01 00:37:28.138 { 00:37:28.138 "results": [ 00:37:28.138 { 00:37:28.138 "job": "nvme0n1", 00:37:28.138 "core_mask": "0x2", 00:37:28.138 "workload": "randread", 00:37:28.138 "status": "finished", 00:37:28.138 "queue_depth": 128, 00:37:28.138 "io_size": 4096, 00:37:28.138 "runtime": 2.004302, 00:37:28.138 "iops": 21578.58446481618, 00:37:28.138 "mibps": 84.2913455656882, 00:37:28.138 "io_failed": 0, 00:37:28.138 "io_timeout": 0, 00:37:28.138 "avg_latency_us": 5926.515906765758, 00:37:28.138 "min_latency_us": 3042.7428571428572, 00:37:28.138 "max_latency_us": 21346.01142857143 00:37:28.138 } 00:37:28.138 ], 00:37:28.138 "core_count": 1 00:37:28.138 } 00:37:28.138 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:28.138 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:28.138 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # 
jq -r '.bdevs[0] 00:37:28.138 | .driver_specific 00:37:28.138 | .nvme_error 00:37:28.138 | .status_code 00:37:28.138 | .command_transient_transport_error' 00:37:28.138 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 44964 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 44964 ']' 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 44964 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44964 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44964' 00:37:28.397 killing process with pid 44964 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 44964 00:37:28.397 Received shutdown signal, test time was about 2.000000 seconds 00:37:28.397 00:37:28.397 Latency(us) 00:37:28.397 [2024-12-13T23:18:07.538Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:37:28.397 [2024-12-13T23:18:07.538Z] =================================================================================================================== 00:37:28.397 [2024-12-13T23:18:07.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.397 00:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 44964 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=46168 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 46168 /var/tmp/bperf.sock 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 46168 ']' 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:37:29.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.334 00:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:29.334 [2024-12-14 00:18:08.363122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:29.334 [2024-12-14 00:18:08.363212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46168 ] 00:37:29.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:29.334 Zero copy mechanism will not be used. 00:37:29.592 [2024-12-14 00:18:08.475134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.592 [2024-12-14 00:18:08.586456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:30.245 00:18:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:30.245 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:30.900 nvme0n1 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:30.900 00:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:30.900 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:30.900 Zero copy mechanism will not be used. 00:37:30.900 Running I/O for 2 seconds... 
00:37:30.900 [2024-12-14 00:18:09.893814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.900 [2024-12-14 00:18:09.893865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.893886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.900819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.900853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.900867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.908445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.908477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.908490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.916326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.916356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.916368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.923349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.923380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.923393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.930074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.930105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.930118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.936766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.936797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.936809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.942883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.942912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 
00:18:09.942924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.948926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.948955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.948967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.955124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.955155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.961226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.961254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.961266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.967367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.967398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.967410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.973651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.973681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.973693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.980053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.980083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.980096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.986289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.986318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.992531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.992560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.992572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:09.998698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:09.998727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:09.998739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.005042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.005071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.005088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.012151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.012187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.012203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.018741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.018771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.018784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.025106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.025138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.025151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.031343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.031373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.031386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:30.901 [2024-12-14 00:18:10.038591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:30.901 [2024-12-14 00:18:10.038626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:30.901 [2024-12-14 00:18:10.038641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.161 [2024-12-14 00:18:10.045195] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.161 [2024-12-14 00:18:10.045224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.161 [2024-12-14 00:18:10.045237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.161 [2024-12-14 00:18:10.049476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.161 [2024-12-14 00:18:10.049504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.161 [2024-12-14 00:18:10.049517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.161 [2024-12-14 00:18:10.054811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.161 [2024-12-14 00:18:10.054840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.161 [2024-12-14 00:18:10.054852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.161 [2024-12-14 00:18:10.061013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.161 [2024-12-14 00:18:10.061043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.161 [2024-12-14 00:18:10.061056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.161 [2024-12-14 00:18:10.067046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.161 [2024-12-14 00:18:10.067074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.067086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.073074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.073102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.073114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.078750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.078778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.078790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.085188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.085217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.085229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.091412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.091445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.091458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.097878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.097907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.097919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.104845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.104874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.104887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.112731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.112760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.121791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.121821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.121834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.129995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.130036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.138915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.138944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.138957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.147587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.147617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.147631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.156262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.156292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.156305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.165078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.165107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.165120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.174360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.174388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.174401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.183459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.183488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.183502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.192335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.192365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.192378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.200954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.200984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.200998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.208605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.208634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.208647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.217272] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.217302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.217315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.225647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.225677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.225691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.234371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.234402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.234415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.242784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.242814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.242827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.251012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.251042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.251055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.259835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.259865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.259883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.268642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.268683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.268697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.274999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.275029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.275042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.162 [2024-12-14 00:18:10.281274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.162 [2024-12-14 00:18:10.281304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.162 [2024-12-14 00:18:10.281317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.163 [2024-12-14 00:18:10.288272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.163 [2024-12-14 00:18:10.288300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.163 [2024-12-14 00:18:10.288313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.163 [2024-12-14 00:18:10.295718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.163 [2024-12-14 00:18:10.295748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.163 [2024-12-14 00:18:10.295760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.422 [2024-12-14 00:18:10.302452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.422 [2024-12-14 00:18:10.302482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.422 [2024-12-14 00:18:10.302494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.422 [2024-12-14 00:18:10.306602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.422 [2024-12-14 00:18:10.306630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.422 [2024-12-14 00:18:10.306643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.311316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.311343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.311356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.317627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.317665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.317677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.323888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.323916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.323928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.329920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.329948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.329960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.336005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.336034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.336046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.341977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.342004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.342016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.348307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.348335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.348347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.354655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.354682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.354694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.361021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.361048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.361059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.367431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.367465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.367480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.373755] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.373782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.373794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.379961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.379988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.379999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.386155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.386182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.386193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.393065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.393092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.393104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.401600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.401629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.401642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.407921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.407949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.407961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.414537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.414565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.414578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.420531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.420558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.420569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.426380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.426411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.426423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.432251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.432278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.432290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.438120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.438147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.438159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.444022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.444049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.444062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.449897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.449925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.449937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.455986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.456014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.456026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.462096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.462123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.462134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.468387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.468415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.423 [2024-12-14 00:18:10.468426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.423 [2024-12-14 00:18:10.474627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.423 [2024-12-14 00:18:10.474653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.474669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.481019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.481048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.481061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.487165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.487192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.487205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.493278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.493305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.493318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.499357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.499386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.499397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.505503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.505530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.505542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.511578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.511605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.511617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.517785] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.517812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.517825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.523892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.523920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.523932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.530112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.530144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.530155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.536273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.536300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.536311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.542389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.542416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.542428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.548554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.548580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.548592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.554715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.554743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.554755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.424 [2024-12-14 00:18:10.560867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.424 [2024-12-14 00:18:10.560894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.424 [2024-12-14 00:18:10.560906] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.567003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.567032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.567044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.573182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.573210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.573222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.579162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.579190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.579205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.585091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.585118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.585129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.591171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.591198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.591209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.597341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.597368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.597379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.603629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.603656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.603668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.609807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.609834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.609846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.616024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.616052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.616063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.622175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.622201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.622213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.628378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.628405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.628417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.634575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.634608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.634619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.640679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.684 [2024-12-14 00:18:10.640706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.684 [2024-12-14 00:18:10.640718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.684 [2024-12-14 00:18:10.646858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.646885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.646897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.652991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.653029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.653041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.659222] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.659251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.659263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.665396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.665423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.665450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.671529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.671557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.671569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.677426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.677460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.677472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.683306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.683333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.683345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.689399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.689427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.689445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.695553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.695580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.695592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.701745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.701784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.707966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.707994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.708005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.714134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.714161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.714174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.720280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.720307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.720319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.726363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.726390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.726402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.732561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.732588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.732600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.738735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.738783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.744879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.744906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.744918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.751127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.751155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.751166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.757278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.757305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.757317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.763462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.763489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.763501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.769582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.769609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.769621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.775765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.775793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.775805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.781898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.781926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.781939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.788091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.788119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.788131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.794417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.794455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.794476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.800726] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.800755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.800767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.807000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.807030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.685 [2024-12-14 00:18:10.807042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.685 [2024-12-14 00:18:10.813294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.685 [2024-12-14 00:18:10.813324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.686 [2024-12-14 00:18:10.813335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.686 [2024-12-14 00:18:10.819500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.686 [2024-12-14 00:18:10.819529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.686 [2024-12-14 00:18:10.819541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.825648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.825677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.825690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.831866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.831894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.831906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.838053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.838080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.838092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.844212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.844245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.844257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.850383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.850409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.850422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.856554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.856582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.856594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.862777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.862806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.862821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.868868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.868897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.868908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.874892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.874919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.874931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.880962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.880989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.881001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.886955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.886982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.886994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.946 4733.00 IOPS, 591.62 MiB/s [2024-12-13T23:18:11.087Z] [2024-12-14 00:18:10.894471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 
[2024-12-14 00:18:10.894499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.894511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.900581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.900609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.900621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.906644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.906671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.906683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.912756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.912784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.912796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.918841] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.918868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.918880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.924846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.924874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.924886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.930852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.930879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.930890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.936868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.936895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.936907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.942913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.942941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.942954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.949703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.949737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.949749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.957271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.957300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.957312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:31.946 [2024-12-14 00:18:10.964651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:31.946 [2024-12-14 00:18:10.964681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:31.946 [2024-12-14 00:18:10.964694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:31.946 [2024-12-14 00:18:10.972464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.946 [2024-12-14 00:18:10.972493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.946 [2024-12-14 00:18:10.972505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:31.946 [2024-12-14 00:18:10.980616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.946 [2024-12-14 00:18:10.980645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.946 [2024-12-14 00:18:10.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:31.946 [2024-12-14 00:18:10.988434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.946 [2024-12-14 00:18:10.988469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.946 [2024-12-14 00:18:10.988481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:31.946 [2024-12-14 00:18:10.995119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:10.995149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:10.995161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.001177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.001206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.001218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.007186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.007214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.007227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.013219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.013247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.013259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.019252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.019280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.019292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.025265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.025293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.025304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.031293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.031321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.031333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.037336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.037363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.037375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.043311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.043338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.043349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.049670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.049697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.049708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.055683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.055709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.055720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.061555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.061586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.061598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.067508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.067542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.067553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.073412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.073444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.073456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:31.947 [2024-12-14 00:18:11.079421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:31.947 [2024-12-14 00:18:11.079454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:31.947 [2024-12-14 00:18:11.079466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.085430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.085478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.091502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.091529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.091541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.097398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.097426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.097443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.103298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.103325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.103337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.109221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.109248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.109260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.115259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.115286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.115298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.121278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.121305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.121316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.127196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.127222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.127234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.133060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.133087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.138945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.138972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.138984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.144890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.144916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.144928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.150815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.150841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.150853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.156756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.156782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.207 [2024-12-14 00:18:11.156793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.207 [2024-12-14 00:18:11.162784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.207 [2024-12-14 00:18:11.162815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.168817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.168843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.168854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.174850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.174889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.180886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.180911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.180922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.186918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.186944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.186956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.192846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.192873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.192885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.198785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.198811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.198823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.204761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.204787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.204798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.210717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.210744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.210756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.216662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.216690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.216701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.222628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.222655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.222667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.228590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.228616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.228628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.234539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.234566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.234578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.240495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.240521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.240533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.246812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.246839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.246850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.252826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.252854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.252865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.258834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.258861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.258872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.264782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.264808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.264824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.270761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.270800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.276658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.276684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.276696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.282571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.282597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.282608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.288429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.288462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.288474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.294297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.294323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.294335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.300225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.300252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.300264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.306271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.306298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.306309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.312176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.312203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.312215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.318224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.318252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.208 [2024-12-14 00:18:11.318263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.208 [2024-12-14 00:18:11.324146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.208 [2024-12-14 00:18:11.324174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.209 [2024-12-14 00:18:11.324186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.209 [2024-12-14 00:18:11.330094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.209 [2024-12-14 00:18:11.330120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.209 [2024-12-14 00:18:11.330133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.209 [2024-12-14 00:18:11.336042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.209 [2024-12-14 00:18:11.336068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.209 [2024-12-14 00:18:11.336081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.209 [2024-12-14 00:18:11.341941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.209 [2024-12-14 00:18:11.341968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.209 [2024-12-14 00:18:11.341980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.468 [2024-12-14 00:18:11.347963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.468 [2024-12-14 00:18:11.347990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.348002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.353927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.353954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.353966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.359935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.359963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.359975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.365901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.365929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.365944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.371858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.371885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.371896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.377804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.377831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.377842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.383760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.383787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.383799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.389703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.389730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.389741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.395645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.395671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.395683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.401589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.401617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.401628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.408424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.408458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.408470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.416126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.416156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.416168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.423405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.423434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.423453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.430936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.430963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.430975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.438829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.438858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.438871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.447425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.447465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.447478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.455343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.455371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.455383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.463221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80)
00:37:32.469 [2024-12-14 00:18:11.463249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:32.469 [2024-12-14 00:18:11.463261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:32.469 [2024-12-14 00:18:11.470916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.470944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.470956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.478772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.478800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.478813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.486733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.486762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.494575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.494603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.494615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.502587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.502616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.502628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.510781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.510810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.510822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.518426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.518463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.518475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.526341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.526381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.534192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.534221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.469 [2024-12-14 00:18:11.534233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.469 [2024-12-14 00:18:11.541425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.469 [2024-12-14 00:18:11.541460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.541472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.547740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.547768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.547780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.553721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.553748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.553760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.559661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.559688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.559700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.565625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.565652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.565664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.571518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.571545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.571557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.577406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.577432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.577450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.583333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.583360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.583371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.589287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.589313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.589325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.595173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.595199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.595210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.601091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.601118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.601136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.470 [2024-12-14 00:18:11.607167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.470 [2024-12-14 00:18:11.607195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.470 [2024-12-14 00:18:11.607218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.613232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.613260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.613273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.618775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.618803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.618816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.624742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.624769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.624781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.630763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.630790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.630802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.636696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.636724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.636735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.642666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.642693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.642704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.648870] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.648896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.648908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.654874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.654901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.654912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.660779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.660806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.660817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.666610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.666637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.666650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.672614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.672641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.672653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.678628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.678654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.678667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.684878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.684906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.684918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.691318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.691346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.691359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.698202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.698230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.698242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.706599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.706628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.706643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.713835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.713863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.713875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.720240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.720270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.720282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.726989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.727020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.727032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.735502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.735531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.735543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.739693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.739722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.739735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.747385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.747413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.747426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.754654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.731 [2024-12-14 00:18:11.754682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.731 [2024-12-14 00:18:11.754695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.731 [2024-12-14 00:18:11.762897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.762925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.762938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.771464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.771498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.771511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.778365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.778395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.778407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.785650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.785680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.791844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.791874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.791886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.797966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.797994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.798006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.804009] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.804036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.804048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.810002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.810030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.810047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.816141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.816169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.816182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.822254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.822281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.822298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.828330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.828359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.828370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.834320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.834348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.834359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.840698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.840726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.840738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.847510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.847538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.847550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.854139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.854167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.854179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.860458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.860485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.860497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.732 [2024-12-14 00:18:11.866681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.732 [2024-12-14 00:18:11.866708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.732 [2024-12-14 00:18:11.866720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.991 [2024-12-14 00:18:11.873042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.991 [2024-12-14 00:18:11.873072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:32.991 [2024-12-14 00:18:11.873083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:32.991 [2024-12-14 00:18:11.879310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.991 [2024-12-14 00:18:11.879342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.991 [2024-12-14 00:18:11.879353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:32.991 [2024-12-14 00:18:11.885598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.991 [2024-12-14 00:18:11.885633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.991 [2024-12-14 00:18:11.885645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:32.991 [2024-12-14 00:18:11.892436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x615000325f80) 00:37:32.991 [2024-12-14 00:18:11.892470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:32.992 [2024-12-14 00:18:11.892482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:32.992 4792.50 IOPS, 599.06 MiB/s 00:37:32.992 Latency(us) 00:37:32.992 [2024-12-13T23:18:12.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.992 Job: nvme0n1 (Core Mask 0x2, workload: 
randread, depth: 16, IO size: 131072) 00:37:32.992 nvme0n1 : 2.00 4792.14 599.02 0.00 0.00 3335.79 885.52 9549.53 00:37:32.992 [2024-12-13T23:18:12.133Z] =================================================================================================================== 00:37:32.992 [2024-12-13T23:18:12.133Z] Total : 4792.14 599.02 0.00 0.00 3335.79 885.52 9549.53 00:37:32.992 { 00:37:32.992 "results": [ 00:37:32.992 { 00:37:32.992 "job": "nvme0n1", 00:37:32.992 "core_mask": "0x2", 00:37:32.992 "workload": "randread", 00:37:32.992 "status": "finished", 00:37:32.992 "queue_depth": 16, 00:37:32.992 "io_size": 131072, 00:37:32.992 "runtime": 2.00349, 00:37:32.992 "iops": 4792.13771967916, 00:37:32.992 "mibps": 599.017214959895, 00:37:32.992 "io_failed": 0, 00:37:32.992 "io_timeout": 0, 00:37:32.992 "avg_latency_us": 3335.790883687711, 00:37:32.992 "min_latency_us": 885.5161904761904, 00:37:32.992 "max_latency_us": 9549.531428571428 00:37:32.992 } 00:37:32.992 ], 00:37:32.992 "core_count": 1 00:37:32.992 } 00:37:32.992 00:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:32.992 00:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:32.992 00:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:32.992 | .driver_specific 00:37:32.992 | .nvme_error 00:37:32.992 | .status_code 00:37:32.992 | .command_transient_transport_error' 00:37:32.992 00:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:32.992 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 310 > 0 )) 00:37:32.992 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 46168 00:37:32.992 00:18:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 46168 ']' 00:37:32.992 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 46168 00:37:32.992 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46168 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46168' 00:37:33.251 killing process with pid 46168 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 46168 00:37:33.251 Received shutdown signal, test time was about 2.000000 seconds 00:37:33.251 00:37:33.251 Latency(us) 00:37:33.251 [2024-12-13T23:18:12.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.251 [2024-12-13T23:18:12.392Z] =================================================================================================================== 00:37:33.251 [2024-12-13T23:18:12.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:33.251 00:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 46168 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- 
# local rw bs qd 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=46867 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 46867 /var/tmp/bperf.sock 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 46867 ']' 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:34.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:34.187 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:34.187 [2024-12-14 00:18:13.134023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:37:34.187 [2024-12-14 00:18:13.134114] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46867 ] 00:37:34.187 [2024-12-14 00:18:13.244517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.445 [2024-12-14 00:18:13.352656] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.012 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:35.012 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:35.012 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:35.012 00:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:35.012 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:35.579 nvme0n1 00:37:35.579 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:35.579 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.579 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:35.579 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.580 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:35.580 00:18:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:35.580 Running I/O for 2 seconds... 
00:37:35.580 [2024-12-14 00:18:14.644361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:35.580 [2024-12-14 00:18:14.645419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.645462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.654350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:35.580 [2024-12-14 00:18:14.655375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.655405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.665302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:35.580 [2024-12-14 00:18:14.666511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.666539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.674964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:35.580 [2024-12-14 00:18:14.675700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.675727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.685225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f35f0 00:37:35.580 [2024-12-14 00:18:14.685972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.685999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.695606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:35.580 [2024-12-14 00:18:14.696342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.696367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.706022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:35.580 [2024-12-14 00:18:14.706766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.580 [2024-12-14 00:18:14.716343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0350 00:37:35.580 [2024-12-14 00:18:14.717196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.580 [2024-12-14 00:18:14.717222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.726884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4b08 00:37:35.840 [2024-12-14 00:18:14.727712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.727738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.737232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:35.840 [2024-12-14 00:18:14.738065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.738090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.747548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:35.840 [2024-12-14 00:18:14.748377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.748403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.757929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:35.840 [2024-12-14 00:18:14.758757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23913 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.758781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.768263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:35.840 [2024-12-14 00:18:14.769096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.769123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.778585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9f68 00:37:35.840 [2024-12-14 00:18:14.779405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.779434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.788921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:35.840 [2024-12-14 00:18:14.789746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.789772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.799214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de470 00:37:35.840 [2024-12-14 00:18:14.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.800068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.809843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:35.840 [2024-12-14 00:18:14.810430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.810462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.821782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:35.840 [2024-12-14 00:18:14.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.823288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.831538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:35.840 [2024-12-14 00:18:14.832618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.832644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.842011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:35.840 [2024-12-14 
00:18:14.842875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.842900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.853841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6cc8 00:37:35.840 [2024-12-14 00:18:14.855602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.855628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.861161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:35.840 [2024-12-14 00:18:14.861989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.862014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.871529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:35.840 [2024-12-14 00:18:14.872471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.872497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.885029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:35.840 [2024-12-14 00:18:14.886802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.886828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.892445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:35.840 [2024-12-14 00:18:14.893249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.840 [2024-12-14 00:18:14.893276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:35.840 [2024-12-14 00:18:14.902774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8d30 00:37:35.840 [2024-12-14 00:18:14.903742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.903768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.914606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:35.841 [2024-12-14 00:18:14.915692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.915718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 
00:18:14.925105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6020 00:37:35.841 [2024-12-14 00:18:14.926229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.926266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.935462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:35.841 [2024-12-14 00:18:14.936547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.936573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.945747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:35.841 [2024-12-14 00:18:14.946864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.946889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.956094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ecc78 00:37:35.841 [2024-12-14 00:18:14.957219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.957245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.966401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:35.841 [2024-12-14 00:18:14.967506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:35.841 [2024-12-14 00:18:14.976767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:35.841 [2024-12-14 00:18:14.977818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:35.841 [2024-12-14 00:18:14.977844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:36.100 [2024-12-14 00:18:14.988701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e38d0 00:37:36.100 [2024-12-14 00:18:14.990354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.100 [2024-12-14 00:18:14.990380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:14.998316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fd640 00:37:36.101 [2024-12-14 00:18:14.999553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:14.999579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.008554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:36.101 [2024-12-14 00:18:15.009786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.009811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.020217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:36.101 [2024-12-14 00:18:15.021914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.021941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.027496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:36.101 [2024-12-14 00:18:15.028327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.028352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.038025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed920 00:37:36.101 [2024-12-14 00:18:15.038861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20641 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:36.101 [2024-12-14 00:18:15.038887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.049627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:36.101 [2024-12-14 00:18:15.050893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.050923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.060128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:36.101 [2024-12-14 00:18:15.061404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.061431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.069730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:36.101 [2024-12-14 00:18:15.070772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.070798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.079950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fdeb0 00:37:36.101 [2024-12-14 00:18:15.081127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:16616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.081153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.089596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:36.101 [2024-12-14 00:18:15.090344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.090370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.100108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e12d8 00:37:36.101 [2024-12-14 00:18:15.100644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.100669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.111960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8a50 00:37:36.101 [2024-12-14 00:18:15.113320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.113346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.121663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef270 00:37:36.101 [2024-12-14 
00:18:15.122671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.122696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.131484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:37:36.101 [2024-12-14 00:18:15.132319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.132345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.141060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:36.101 [2024-12-14 00:18:15.141940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.141965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.151908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:36.101 [2024-12-14 00:18:15.152843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.152869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.162975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173e4de8 00:37:36.101 [2024-12-14 00:18:15.164164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.164189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.173842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:36.101 [2024-12-14 00:18:15.175137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.175163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.184688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eea00 00:37:36.101 [2024-12-14 00:18:15.186116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.186142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.195144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:36.101 [2024-12-14 00:18:15.196595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.196621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 
00:18:15.203860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eee38 00:37:36.101 [2024-12-14 00:18:15.204420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.204452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.214636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:36.101 [2024-12-14 00:18:15.215298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.215324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.225415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:36.101 [2024-12-14 00:18:15.226221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.226251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:36.101 [2024-12-14 00:18:15.237310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f31b8 00:37:36.101 [2024-12-14 00:18:15.239032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.101 [2024-12-14 00:18:15.239057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.244723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:36.360 [2024-12-14 00:18:15.245415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.245450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.255594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e88f8 00:37:36.360 [2024-12-14 00:18:15.256474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.256501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.265452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:36.360 [2024-12-14 00:18:15.266335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.266361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.276294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:36.360 [2024-12-14 00:18:15.277304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.277331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.287170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:36.360 [2024-12-14 00:18:15.288328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.288354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.297721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:36.360 [2024-12-14 00:18:15.298870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.298895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.308328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6300 00:37:36.360 [2024-12-14 00:18:15.309467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.309493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.318162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f9b30 00:37:36.360 [2024-12-14 00:18:15.319306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.319332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.329012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5658 00:37:36.360 [2024-12-14 00:18:15.330309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.330334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.339858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fef90 00:37:36.360 [2024-12-14 00:18:15.341274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.341300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.350723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:36.360 [2024-12-14 00:18:15.352264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.352290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.360290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:36.360 [2024-12-14 00:18:15.361362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.361388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.370661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebfd0 00:37:36.360 [2024-12-14 00:18:15.371735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.371761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.380167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:36.360 [2024-12-14 00:18:15.381318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.381344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.390968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3e60 00:37:36.360 [2024-12-14 00:18:15.392251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.392277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.401853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:36.360 [2024-12-14 
00:18:15.403323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.360 [2024-12-14 00:18:15.403348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:37:36.360 [2024-12-14 00:18:15.412952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:36.360 [2024-12-14 00:18:15.414534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.414560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.423846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:36.361 [2024-12-14 00:18:15.425605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.433086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ea680 00:37:36.361 [2024-12-14 00:18:15.434151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.434177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.443675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000173e4578 00:37:36.361 [2024-12-14 00:18:15.444940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.444966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.453449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:36.361 [2024-12-14 00:18:15.454734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.454760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.464276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:36.361 [2024-12-14 00:18:15.465709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.465734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.475084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:36.361 [2024-12-14 00:18:15.476829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.476854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 
00:18:15.486140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e95a0 00:37:36.361 [2024-12-14 00:18:15.487838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.487865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:36.361 [2024-12-14 00:18:15.493458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:36.361 [2024-12-14 00:18:15.494206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.361 [2024-12-14 00:18:15.494231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.503342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e0ea0 00:37:36.620 [2024-12-14 00:18:15.504092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.504118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.514199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e84c0 00:37:36.620 [2024-12-14 00:18:15.515068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.515093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.525731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:36.620 [2024-12-14 00:18:15.526744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.526770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.535424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:36.620 [2024-12-14 00:18:15.536428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.536468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.546932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5220 00:37:36.620 [2024-12-14 00:18:15.547995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.548020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.557576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:36.620 [2024-12-14 00:18:15.558774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.558800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.566314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4298 00:37:36.620 [2024-12-14 00:18:15.566817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.566843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.577154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:36.620 [2024-12-14 00:18:15.577792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.577819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.587957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2948 00:37:36.620 [2024-12-14 00:18:15.588731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.588758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.597768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f46d0 00:37:36.620 [2024-12-14 00:18:15.599162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.599186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.606698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:36.620 [2024-12-14 00:18:15.607413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.607445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.617484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e6b70 00:37:36.620 [2024-12-14 00:18:15.618341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.618367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.628320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5a90 00:37:36.620 [2024-12-14 00:18:15.629347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.629372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:37:36.620 24491.00 IOPS, 95.67 MiB/s [2024-12-13T23:18:15.761Z] [2024-12-14 00:18:15.641079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:36.620 [2024-12-14 00:18:15.642619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.651893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:36.620 [2024-12-14 00:18:15.653560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.653585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.659386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f2510 00:37:36.620 [2024-12-14 00:18:15.660153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.660177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.670964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:36.620 [2024-12-14 00:18:15.672094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.672122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.681644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173e99d8 00:37:36.620 [2024-12-14 00:18:15.682779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.620 [2024-12-14 00:18:15.682804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:36.620 [2024-12-14 00:18:15.693448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e99d8 00:37:36.621 [2024-12-14 00:18:15.695130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.695154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.700726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed4e8 00:37:36.621 [2024-12-14 00:18:15.701460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.701485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.712514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ebb98 00:37:36.621 [2024-12-14 00:18:15.714017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.714042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.721415] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:36.621 [2024-12-14 00:18:15.722261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.722286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.732194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7538 00:37:36.621 [2024-12-14 00:18:15.733175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.733200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.743063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e23b8 00:37:36.621 [2024-12-14 00:18:15.744188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.744212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:37:36.621 [2024-12-14 00:18:15.753892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9e10 00:37:36.621 [2024-12-14 00:18:15.755158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.621 [2024-12-14 00:18:15.755182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:002c p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.763604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa7d8 00:37:36.880 [2024-12-14 00:18:15.764387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.764412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.774221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb760 00:37:36.880 [2024-12-14 00:18:15.775203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.775228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.786086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:36.880 [2024-12-14 00:18:15.787598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.787623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.796946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:36.880 [2024-12-14 00:18:15.798614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.798638] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.807763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:36.880 [2024-12-14 00:18:15.809580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.809605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.815057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fa3a0 00:37:36.880 [2024-12-14 00:18:15.815907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.815931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.824929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e7c50 00:37:36.880 [2024-12-14 00:18:15.825778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.825803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.835816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fda78 00:37:36.880 [2024-12-14 00:18:15.836778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 
00:18:15.836803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.846652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:36.880 [2024-12-14 00:18:15.847760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.847790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.857498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:36.880 [2024-12-14 00:18:15.858655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.858681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.868289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:36.880 [2024-12-14 00:18:15.869669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.869693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.879154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ddc00 00:37:36.880 [2024-12-14 00:18:15.880680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22300 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.880705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.890059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:36.880 [2024-12-14 00:18:15.891719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.891744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.900884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:36.880 [2024-12-14 00:18:15.902657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.902684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.908209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:36.880 [2024-12-14 00:18:15.909067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.909092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.918311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fbcf0 00:37:36.880 [2024-12-14 00:18:15.919138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.880 [2024-12-14 00:18:15.919163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:37:36.880 [2024-12-14 00:18:15.929305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:36.880 [2024-12-14 00:18:15.930244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.930269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.940229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e3d08 00:37:36.881 [2024-12-14 00:18:15.941313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.941337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.951047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:36.881 [2024-12-14 00:18:15.952289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.952314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.961883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173eaab8 00:37:36.881 [2024-12-14 00:18:15.963265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.963290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.972722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:36.881 [2024-12-14 00:18:15.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.974279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.982173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fac10 00:37:36.881 [2024-12-14 00:18:15.983745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.983770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:15.991344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:36.881 [2024-12-14 00:18:15.992207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:15.992232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:16.002424] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:36.881 [2024-12-14 00:18:16.003399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:16.003424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:36.881 [2024-12-14 00:18:16.013295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:36.881 [2024-12-14 00:18:16.014411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:36.881 [2024-12-14 00:18:16.014436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:37.140 [2024-12-14 00:18:16.024369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0788 00:37:37.140 [2024-12-14 00:18:16.025640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.140 [2024-12-14 00:18:16.025665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:37.140 [2024-12-14 00:18:16.035216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8618 00:37:37.140 [2024-12-14 00:18:16.036615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.140 [2024-12-14 00:18:16.036640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 
sqhd:003b p:0 m:0 dnr:0 00:37:37.140 [2024-12-14 00:18:16.046059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:37.140 [2024-12-14 00:18:16.047587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.140 [2024-12-14 00:18:16.047612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:37.140 [2024-12-14 00:18:16.056917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ed0b0 00:37:37.141 [2024-12-14 00:18:16.058581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.067716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e27f0 00:37:37.141 [2024-12-14 00:18:16.069538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.069562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.075036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee190 00:37:37.141 [2024-12-14 00:18:16.075882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.075907] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.084858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5be8 00:37:37.141 [2024-12-14 00:18:16.085696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.085721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.095676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f96f8 00:37:37.141 [2024-12-14 00:18:16.096639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.096664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.106548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:37.141 [2024-12-14 00:18:16.107659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.107684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.117358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f92c0 00:37:37.141 [2024-12-14 00:18:16.118618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 
00:18:16.118647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.128181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaab8 00:37:37.141 [2024-12-14 00:18:16.129582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.129607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.139041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:37.141 [2024-12-14 00:18:16.140583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.140607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.149836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f3a28 00:37:37.141 [2024-12-14 00:18:16.151499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.151524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.160704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:37.141 [2024-12-14 00:18:16.162559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:912 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.162584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.168236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f57b0 00:37:37.141 [2024-12-14 00:18:16.169085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.169110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.179839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:37.141 [2024-12-14 00:18:16.181090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.181115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.190554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe2e8 00:37:37.141 [2024-12-14 00:18:16.191720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.191745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.200976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6458 00:37:37.141 [2024-12-14 00:18:16.202135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.202162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.211296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f1430 00:37:37.141 [2024-12-14 00:18:16.212441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.212467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.220979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:37:37.141 [2024-12-14 00:18:16.222106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.222131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.231803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f4f40 00:37:37.141 [2024-12-14 00:18:16.233080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.233103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.242692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173fb480 00:37:37.141 [2024-12-14 00:18:16.244097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.244122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.253530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:37.141 [2024-12-14 00:18:16.255063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.255088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.263967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ee5c8 00:37:37.141 [2024-12-14 00:18:16.265514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.265540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:37.141 [2024-12-14 00:18:16.271031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0bc0 00:37:37.141 [2024-12-14 00:18:16.271756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.141 [2024-12-14 00:18:16.271781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.282128] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:37.401 [2024-12-14 00:18:16.283086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.283111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.292932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:37.401 [2024-12-14 00:18:16.294016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.294044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.303800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f8e88 00:37:37.401 [2024-12-14 00:18:16.305041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.305065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.314626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173edd58 00:37:37.401 [2024-12-14 00:18:16.315996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.316020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:0038 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.325494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc128 00:37:37.401 [2024-12-14 00:18:16.326994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.327018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.336371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173de8a8 00:37:37.401 [2024-12-14 00:18:16.338011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.338036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.347160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e9168 00:37:37.401 [2024-12-14 00:18:16.348943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.348968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.354485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e8088 00:37:37.401 [2024-12-14 00:18:16.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.355326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.364994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1b48 00:37:37.401 [2024-12-14 00:18:16.365844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.365869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.375641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e1710 00:37:37.401 [2024-12-14 00:18:16.376520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.376545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.386517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb8b8 00:37:37.401 [2024-12-14 00:18:16.387624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.387649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.397277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc998 00:37:37.401 [2024-12-14 00:18:16.398131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 
00:18:16.398157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.407081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e5ec8 00:37:37.401 [2024-12-14 00:18:16.408299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.408324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.417872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173feb58 00:37:37.401 [2024-12-14 00:18:16.419254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.419280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.426727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ec840 00:37:37.401 [2024-12-14 00:18:16.427563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.427588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.437877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:37.401 [2024-12-14 00:18:16.438781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1503 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.438806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.448752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f0ff8 00:37:37.401 [2024-12-14 00:18:16.449790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.449816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.459563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f6890 00:37:37.401 [2024-12-14 00:18:16.460755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.460780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.470453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:37.401 [2024-12-14 00:18:16.471775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.471800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.481497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb480 00:37:37.401 [2024-12-14 00:18:16.482983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.483010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.492325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:37.401 [2024-12-14 00:18:16.493968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.493994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.499863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ef6a8 00:37:37.401 [2024-12-14 00:18:16.500671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.500696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.510735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e49b0 00:37:37.401 [2024-12-14 00:18:16.511694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.511719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.521576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000173ec408 00:37:37.401 [2024-12-14 00:18:16.522648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.522674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:37:37.401 [2024-12-14 00:18:16.532430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fb048 00:37:37.401 [2024-12-14 00:18:16.533638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.401 [2024-12-14 00:18:16.533663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.542165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eb328 00:37:37.661 [2024-12-14 00:18:16.542891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.542918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.551668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:37.661 [2024-12-14 00:18:16.552367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.552392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.564552] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f7da8 00:37:37.661 [2024-12-14 00:18:16.566027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.566057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.575362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173e73e0 00:37:37.661 [2024-12-14 00:18:16.576978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.577004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.586194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173eaef0 00:37:37.661 [2024-12-14 00:18:16.587915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.587941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.593510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173ff3c8 00:37:37.661 [2024-12-14 00:18:16.594212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.594238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.603923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fe720 00:37:37.661 [2024-12-14 00:18:16.604620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.604646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.613597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173fc560 00:37:37.661 [2024-12-14 00:18:16.614283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.614309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.625728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f20d8 00:37:37.661 [2024-12-14 00:18:16.626699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 00:18:16.626725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:37:37.661 [2024-12-14 00:18:16.635362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000173f5378 00:37:37.661 24552.50 IOPS, 95.91 MiB/s [2024-12-13T23:18:16.802Z] [2024-12-14 00:18:16.636422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:37.661 [2024-12-14 
00:18:16.636454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:37.661 00:37:37.661 Latency(us) 00:37:37.661 [2024-12-13T23:18:16.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.661 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:37.661 nvme0n1 : 2.00 24562.44 95.95 0.00 0.00 5206.42 2559.02 13981.01 00:37:37.661 [2024-12-13T23:18:16.802Z] =================================================================================================================== 00:37:37.661 [2024-12-13T23:18:16.802Z] Total : 24562.44 95.95 0.00 0.00 5206.42 2559.02 13981.01 00:37:37.661 { 00:37:37.661 "results": [ 00:37:37.661 { 00:37:37.661 "job": "nvme0n1", 00:37:37.661 "core_mask": "0x2", 00:37:37.661 "workload": "randwrite", 00:37:37.661 "status": "finished", 00:37:37.661 "queue_depth": 128, 00:37:37.661 "io_size": 4096, 00:37:37.661 "runtime": 2.004402, 00:37:37.661 "iops": 24562.438073799567, 00:37:37.661 "mibps": 95.94702372577956, 00:37:37.661 "io_failed": 0, 00:37:37.661 "io_timeout": 0, 00:37:37.661 "avg_latency_us": 5206.4224611250875, 00:37:37.661 "min_latency_us": 2559.024761904762, 00:37:37.661 "max_latency_us": 13981.013333333334 00:37:37.661 } 00:37:37.661 ], 00:37:37.661 "core_count": 1 00:37:37.661 } 00:37:37.661 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:37.661 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:37.661 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:37.661 | .driver_specific 00:37:37.661 | .nvme_error 00:37:37.661 | .status_code 00:37:37.661 | .command_transient_transport_error' 00:37:37.661 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 46867 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 46867 ']' 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 46867 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 46867 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 46867' 00:37:37.920 killing process with pid 46867 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 46867 00:37:37.920 Received shutdown signal, test time was about 2.000000 seconds 00:37:37.920 00:37:37.920 Latency(us) 00:37:37.920 [2024-12-13T23:18:17.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.920 [2024-12-13T23:18:17.061Z] =================================================================================================================== 00:37:37.920 [2024-12-13T23:18:17.061Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:37:37.920 00:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 46867 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=47547 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 47547 /var/tmp/bperf.sock 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 47547 ']' 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
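As a sanity check on the bdevperf summary printed above, the MiB/s column is simply IOPS times the 4096-byte IO size divided by 2^20; the constants below are copied from the `"results"` JSON block in the log:

```shell
# Cross-check of the bdevperf summary above: MiB/s = IOPS * io_size / 2^20.
# iops and io_size are copied verbatim from the JSON results block.
awk 'BEGIN {
  iops    = 24562.438073799567   # "iops" from the results JSON
  io_size = 4096                 # "io_size" (bytes) from the results JSON
  printf "%.2f\n", iops * io_size / 1048576
}'
```

This reproduces the 95.95 MiB/s reported in the `Total` row.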
00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.857 00:18:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.857 [2024-12-14 00:18:17.868706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:38.857 [2024-12-14 00:18:17.868800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47547 ] 00:37:38.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:38.857 Zero copy mechanism will not be used. 00:37:38.857 [2024-12-14 00:18:17.986112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.115 [2024-12-14 00:18:18.091550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:39.683 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:39.683 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:39.683 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:39.683 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:39.941 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:39.941 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.942 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:37:39.942 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.942 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:39.942 00:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:40.201 nvme0n1 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:40.201 00:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:40.201 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:40.201 Zero copy mechanism will not be used. 00:37:40.201 Running I/O for 2 seconds... 
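The `(( 193 > 0 ))` check earlier in the log comes from piping `bdev_get_iostat` (with `--nvme-error-stat` enabled) through the jq filter shown in the trace. A standalone sketch of that extraction, with the JSON shape assumed from the filter itself and the count taken from the log; a live run would pipe `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` instead of the sample file:

```shell
# Sample iostat output (shape assumed from the jq filter in the log).
cat <<'EOF' > /tmp/iostat_sample.json
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 193
          }
        }
      }
    }
  ]
}
EOF
# Same filter as host/digest.sh: walk to the transient-error counter.
jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat_sample.json
```

With the sample above this prints `193`, the count the test asserts is nonzero.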
00:37:40.201 [2024-12-14 00:18:19.310130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.310225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.310264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.201 [2024-12-14 00:18:19.316538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.316618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.316650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.201 [2024-12-14 00:18:19.322971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.323054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.323081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.201 [2024-12-14 00:18:19.329047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.329178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.329205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.201 [2024-12-14 00:18:19.334616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.334702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.334728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.201 [2024-12-14 00:18:19.340071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.201 [2024-12-14 00:18:19.340150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.201 [2024-12-14 00:18:19.340177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.345449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.350792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.350865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.350891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.356162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.356236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.356267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.361488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.361563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.361590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.366922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.366993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.367019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.372392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.372464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.372491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.377877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.377945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.377971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.461 [2024-12-14 00:18:19.383351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.461 [2024-12-14 00:18:19.383422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.461 [2024-12-14 00:18:19.383456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.388860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.388941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.388974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.394361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.394446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.394472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.399815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.399895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.399921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.405200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.405285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.405311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.410618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.410704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.410729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.416055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.416130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.416156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.421511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.421587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.421612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.427010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.427090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.427116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.432412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.432509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.432536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.437758] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.437829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.437856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.443026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.443112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.443138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.448304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.448376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.448405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.453612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.453689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.453715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.458993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.459078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.459104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.464247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.464323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.464349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.469573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.469655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.469681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.474959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.475040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.475065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.480548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.480621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.480647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.485955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.486052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.486078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.491332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.491426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.491459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.496625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.496716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.496742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.501980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.502050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.502077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.507333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.507409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.507435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.512730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.512799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.512825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.518115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.518184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.518210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.523541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.523630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.523656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.528863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.528944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.462 [2024-12-14 00:18:19.528970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.462 [2024-12-14 00:18:19.534201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.462 [2024-12-14 00:18:19.534273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.534298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.539616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 
00:18:19.539684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.539710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.545056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.545140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.545166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.550554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.550641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.550667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.555909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.556002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.556027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.561291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.561361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.561387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.566795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.566897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.566923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.572324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.572401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.572427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.577719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.577805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.577831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 
00:18:19.583129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.583205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.583230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.588590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.588667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.588693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.594137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.594208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.594234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.463 [2024-12-14 00:18:19.599626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.463 [2024-12-14 00:18:19.599701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.463 [2024-12-14 00:18:19.599726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.723 [2024-12-14 00:18:19.605002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.723 [2024-12-14 00:18:19.605077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.723 [2024-12-14 00:18:19.605103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.723 [2024-12-14 00:18:19.610394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.723 [2024-12-14 00:18:19.610513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.723 [2024-12-14 00:18:19.610539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.615779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.615856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.615882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.621214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.621284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.621311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.626652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.626724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.626749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.632060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.632126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.632152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.637514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.637587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.637612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.642831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.642911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.642937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.648241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.648329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.648355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.653643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.653736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.653762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.659778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.659930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.659956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.666386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.666528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.666554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.672628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.672761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.672786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.678494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.678568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.678593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.684412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.684555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.684585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.691395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 
00:18:19.691536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.691561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.698624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.698994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.699020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.705903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.706299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.706325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.713402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.713827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.713853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.720636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.721054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.721080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.728706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.729083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.729109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.736593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.736959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.736986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.744665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.745048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.745074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 
00:18:19.751546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.751910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.751937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.757892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.758268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.758305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.764130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.764478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.764504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.770476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.770795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.770822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.776530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.776855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.724 [2024-12-14 00:18:19.776880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.724 [2024-12-14 00:18:19.782411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.724 [2024-12-14 00:18:19.782748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.782773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.788013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.788360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.788386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.793379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.793714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.793740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.798536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.798863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.798893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.803889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.804233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.804261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.809238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.809580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.809606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.814451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.814802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.814828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.819854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.820175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.820202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.825114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.825458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.825484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.830914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.831261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.831287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.837395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.837732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.837758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.842816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.843169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.848015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.848352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.848379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.853339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 00:18:19.853678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.853703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.725 [2024-12-14 00:18:19.858597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.725 [2024-12-14 
00:18:19.858929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.725 [2024-12-14 00:18:19.858955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.863918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.864252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.864279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.869033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.869366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.869393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.874676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.875013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.875040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.880101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.880443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.880469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.885331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.885668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.885694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.890553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.890891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.890917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.895695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.896021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.896047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 
00:18:19.900893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.985 [2024-12-14 00:18:19.901225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.985 [2024-12-14 00:18:19.901251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.985 [2024-12-14 00:18:19.906253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.906615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.906641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.911882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.912219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.912244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.918002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.918326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.918352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.924334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.924667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.924694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.930719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.931069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.931096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.937153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.937492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.937518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.944719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.945118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.945148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.951435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.951769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.951795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.957258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.957592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.957618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.962483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.962811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.962837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.968150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.968566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.968592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.974318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.974673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.974700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.980563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.980902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.980928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.986302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.986644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.986670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.991586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.991923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.991949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:19.997004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:19.997344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:19.997370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.002281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.002635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.002662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.008150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.008491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.008545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.014942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 
00:18:20.015341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.015369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.020690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.021012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.021039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.026193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.026540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.031767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.032110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.032137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.038621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.038961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.038990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.043991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.044323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.044355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.049329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.049671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.049698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.054627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.054963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.054990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 
00:18:20.059929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.060265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.060291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.986 [2024-12-14 00:18:20.065470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.986 [2024-12-14 00:18:20.065813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.986 [2024-12-14 00:18:20.065839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.072083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.072468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.072499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.077875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.078196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.078223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.083571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.083901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.083927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.089676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.090012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.090039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.096418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.096775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.096802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.104141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.104537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.104564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.112406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.112814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.112841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:40.987 [2024-12-14 00:18:20.120060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:40.987 [2024-12-14 00:18:20.120433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.987 [2024-12-14 00:18:20.120467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.246 [2024-12-14 00:18:20.128391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.246 [2024-12-14 00:18:20.128829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.246 [2024-12-14 00:18:20.128857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.246 [2024-12-14 00:18:20.136652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.137058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.137095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.144660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.145034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.145062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.152512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.152897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.152924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.160382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.160822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.168928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.169332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.169359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.176706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.177103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.177131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.184318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.184764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.184794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.192479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.192894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.192921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.200456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 
00:18:20.200893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.200920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.208613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.209015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.209042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.216338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.216679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.216706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.223977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.224385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.231935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.232366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.232392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.239564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.239951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.239977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.247288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.247692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.247719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.255475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.255872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.255900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 
00:18:20.263156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.263500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.263526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.270473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.270840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.270867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.277423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.277769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.277796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.284075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.284419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.284452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.289649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.289978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.290005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.295551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.295868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.295895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.301273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.301669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.301696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.247 5094.00 IOPS, 636.75 MiB/s [2024-12-13T23:18:20.388Z] [2024-12-14 00:18:20.308394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.308962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 
[2024-12-14 00:18:20.308989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.314765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.315188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.315215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.321430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.321772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.321799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.247 [2024-12-14 00:18:20.327447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.247 [2024-12-14 00:18:20.327797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.247 [2024-12-14 00:18:20.327824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.333518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.333863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.333889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.340053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.340452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.340479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.346374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.346778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.352380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.352758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.352786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.359082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.359448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.359476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.365720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.366010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.366037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.371057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.371382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.376817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.377123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.377150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.248 [2024-12-14 00:18:20.382762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:41.248 [2024-12-14 00:18:20.383065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.248 [2024-12-14 00:18:20.383092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.388548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.388853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.388879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.393950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.394191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.394217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.400267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.400625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.400651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.406752] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.407013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.407038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.412335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.412587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.412613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.417379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.417671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.417697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.423212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.423477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.423504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.428963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.429239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.429265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.507 [2024-12-14 00:18:20.434533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.507 [2024-12-14 00:18:20.434714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.507 [2024-12-14 00:18:20.434740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.440760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.441004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.441030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.446759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.446984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.447015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.451873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.452065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.452091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.457304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.457525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.457551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.462347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.462475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.462501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.467419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.467635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 
[2024-12-14 00:18:20.467661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.472732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.472942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.472968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.478036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.478247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.478273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.483219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.483460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.483485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.488866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.489059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.489085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.493490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.493711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.493738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.498220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.498454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.498481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.502811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.503018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.503044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.507365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.507598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.507623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.511895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.512127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.512152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.516419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.516646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.516673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.520920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.521152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.521179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.525432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.525609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.525635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.530571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.530746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.530776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.535427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.535593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.535628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.539972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.540153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.540179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.544532] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.544702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.544727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.549068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.549243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.549268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.553580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.553771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.553796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.558136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.558310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.558335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.562682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.562866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.562892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.567247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.508 [2024-12-14 00:18:20.567424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.508 [2024-12-14 00:18:20.567456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.508 [2024-12-14 00:18:20.571778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.571968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.571994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.576356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.576540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.576566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.580844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.581030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.581056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.585521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.585704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.585729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.590879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.591032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.591058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.595806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.595972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.595997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.600384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.600597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.600623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.605020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.605190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.605215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.609503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.609678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.609703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.614018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.614187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.614212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.618617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.618810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.618835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.623112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.623293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.623318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.627621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.627796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.627822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.632085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 
00:18:20.632273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.632298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.636603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.636798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.636823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.641097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.641271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.641297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.509 [2024-12-14 00:18:20.645596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.509 [2024-12-14 00:18:20.645776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.509 [2024-12-14 00:18:20.645801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.650152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.650332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.650361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.654718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.654897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.654923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.659186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.659364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.659389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.663736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.663914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.663939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 
00:18:20.668249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.668421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.668454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.672756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.672953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.672978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.677285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.677487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.677513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.681792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.681967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.681992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.686336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.686542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.686569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.690846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.691022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.691048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.695404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.695578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.695604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.699926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.700094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.700119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.704470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.704649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.704675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.709007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.709207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.709232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.713498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.713675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.713700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.717994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.718172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.718197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.722557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.722737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.722762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.727024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.769 [2024-12-14 00:18:20.727206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.769 [2024-12-14 00:18:20.727238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.769 [2024-12-14 00:18:20.731758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.731940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.737096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.737246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.741721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.741891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.741917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.746262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.746448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.746474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.750708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.750887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.750913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.755221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 
00:18:20.755386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.755411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.760575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.760743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.760769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.765267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.765454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.765479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.769840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.770021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.770047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.774384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.774572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.774598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.778999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.779175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.779202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.783875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.784041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.784066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.789071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.789200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.789225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 
00:18:20.794107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.794248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.794273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.799360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.799537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.799562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.804319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.804485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.804510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.809252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.809417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.809452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.814262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.814624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.814650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.819493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.819632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.819657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.824533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.824700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.824726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.829698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.829889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.834879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.835041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.835077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.840079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.840233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.840258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.845487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.845647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.845674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.850066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.850237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.850262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.854602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.854774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.854800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.859073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.859252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.859277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.863662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.770 [2024-12-14 00:18:20.863858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.770 [2024-12-14 00:18:20.863884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.770 [2024-12-14 00:18:20.868486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.868652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.868676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.873034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.873206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.873232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.877677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.877891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.877916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.883063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.883322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.883348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.888659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 
00:18:20.888939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.888965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.895167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.895309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.895334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:41.771 [2024-12-14 00:18:20.901898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:41.771 [2024-12-14 00:18:20.902074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.771 [2024-12-14 00:18:20.902099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.908049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.908314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.908341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.914428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.914654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.914691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.920528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.920739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.920765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.926994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.927187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.927212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.934131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.934360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.934386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 
00:18:20.941114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.941294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.941320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.947906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.948085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.948111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.954787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.954952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.954982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.959887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.960047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.960074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.964466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.964643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.964668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.969011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.969190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.969215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.973672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.973846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.973871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.978186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.978369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.978394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.982779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.982964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.982989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.987305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.987485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.987510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.991889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.992066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.992091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:20.996427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:20.996621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:20.996647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:21.000944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:21.001117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:21.001143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:21.005410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:21.005615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:21.005641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:21.010111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:21.010280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:21.010305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:21.015399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.031 [2024-12-14 00:18:21.015584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.031 [2024-12-14 00:18:21.015611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.031 [2024-12-14 00:18:21.021083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.021252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.021277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.026089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.026262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.026287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.030658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.030837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.030862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.035181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 
00:18:21.035340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.035373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.039707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.039884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.039910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.044200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.044375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.044401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.048793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.048959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.048986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.053311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.053499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.053524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.057855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.058017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.058042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.062400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.062597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.062622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.066962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.067145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.067170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 
00:18:21.071523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.071688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.071713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.075996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.076163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.076189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.080604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.080772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.080798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.085173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.085342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.085367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.089757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.089928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.089953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.094244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.094420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.094452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.098754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.098921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.098950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.103189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.103369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.103394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.107700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.107850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.107875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.112175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.112346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.112371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.116663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.116831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.116856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.121170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.121374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.121399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.125691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.125847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.125871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.130646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.130828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.130853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.135585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.135755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.135781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.140129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.140302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.140327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.032 [2024-12-14 00:18:21.144649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.032 [2024-12-14 00:18:21.144816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.032 [2024-12-14 00:18:21.144840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.033 [2024-12-14 00:18:21.149110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.033 [2024-12-14 00:18:21.149283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.033 [2024-12-14 00:18:21.149308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.033 [2024-12-14 00:18:21.153589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.033 [2024-12-14 00:18:21.153756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.033 [2024-12-14 00:18:21.153808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.033 [2024-12-14 00:18:21.158103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.033 [2024-12-14 
00:18:21.158276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.033 [2024-12-14 00:18:21.158301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.033 [2024-12-14 00:18:21.162620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.033 [2024-12-14 00:18:21.162785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.033 [2024-12-14 00:18:21.162809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.033 [2024-12-14 00:18:21.167236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.033 [2024-12-14 00:18:21.167418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.033 [2024-12-14 00:18:21.167449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.171838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.172012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.172038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.176487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.176667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.176704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.180931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.181154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.185372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.185570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.185595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.189792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.189969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.189995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 
00:18:21.194203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.194377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.194403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.198636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.198821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.198846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.203059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.203246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.203271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.207502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.207675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.207700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.211928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.212106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.212131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.216338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.216541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.216565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.220729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.220931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.220955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.225141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.225343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.225368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.229569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.229754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.229783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.233951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.234137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.234162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.238303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.238533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.238558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.242759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.242932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.242957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.247168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.247352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.251551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.251736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.251761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.255953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.256122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.256147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.260367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.260536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.260561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.264749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.264932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.264957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.269145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.269325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.269350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.273566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.273768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.273793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.277974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 
00:18:21.278156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.278180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.282373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.282564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.293 [2024-12-14 00:18:21.282589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.293 [2024-12-14 00:18:21.286792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.293 [2024-12-14 00:18:21.286963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.286987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.294 [2024-12-14 00:18:21.291245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.294 [2024-12-14 00:18:21.291448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.291474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.294 [2024-12-14 00:18:21.295660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.294 [2024-12-14 00:18:21.295835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.295860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.294 [2024-12-14 00:18:21.300054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.294 [2024-12-14 00:18:21.300246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.300271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.294 [2024-12-14 00:18:21.304458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.294 [2024-12-14 00:18:21.304651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.304680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.294 5711.00 IOPS, 713.88 MiB/s [2024-12-13T23:18:21.435Z] [2024-12-14 00:18:21.309997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000173ff3c8 00:37:42.294 [2024-12-14 00:18:21.310075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.294 [2024-12-14 00:18:21.310101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:37:42.294 00:37:42.294 Latency(us) 00:37:42.294 [2024-12-13T23:18:21.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.294 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:42.294 nvme0n1 : 2.00 5709.39 713.67 0.00 0.00 2797.34 1927.07 11796.48 00:37:42.294 [2024-12-13T23:18:21.435Z] =================================================================================================================== 00:37:42.294 [2024-12-13T23:18:21.435Z] Total : 5709.39 713.67 0.00 0.00 2797.34 1927.07 11796.48 00:37:42.294 { 00:37:42.294 "results": [ 00:37:42.294 { 00:37:42.294 "job": "nvme0n1", 00:37:42.294 "core_mask": "0x2", 00:37:42.294 "workload": "randwrite", 00:37:42.294 "status": "finished", 00:37:42.294 "queue_depth": 16, 00:37:42.294 "io_size": 131072, 00:37:42.294 "runtime": 2.004243, 00:37:42.294 "iops": 5709.387534345885, 00:37:42.294 "mibps": 713.6734417932356, 00:37:42.294 "io_failed": 0, 00:37:42.294 "io_timeout": 0, 00:37:42.294 "avg_latency_us": 2797.3442142628264, 00:37:42.294 "min_latency_us": 1927.0704761904763, 00:37:42.294 "max_latency_us": 11796.48 00:37:42.294 } 00:37:42.294 ], 00:37:42.294 "core_count": 1 00:37:42.294 } 00:37:42.294 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:42.294 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:42.294 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:42.294 | .driver_specific 00:37:42.294 | .nvme_error 00:37:42.294 | .status_code 00:37:42.294 | .command_transient_transport_error' 00:37:42.294 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:42.553 00:18:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 370 > 0 )) 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 47547 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 47547 ']' 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 47547 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 47547 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47547' 00:37:42.553 killing process with pid 47547 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 47547 00:37:42.553 Received shutdown signal, test time was about 2.000000 seconds 00:37:42.553 00:37:42.553 Latency(us) 00:37:42.553 [2024-12-13T23:18:21.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.553 [2024-12-13T23:18:21.694Z] =================================================================================================================== 00:37:42.553 [2024-12-13T23:18:21.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:42.553 00:18:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 47547 
00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 44789 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 44789 ']' 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 44789 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 44789 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 44789' 00:37:43.537 killing process with pid 44789 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 44789 00:37:43.537 00:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 44789 00:37:44.914 00:37:44.914 real 0m21.291s 00:37:44.914 user 0m40.037s 00:37:44.914 sys 0m4.826s 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:44.914 ************************************ 00:37:44.914 END TEST nvmf_digest_error 00:37:44.914 ************************************ 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 
-- # trap - SIGINT SIGTERM EXIT 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:44.914 rmmod nvme_tcp 00:37:44.914 rmmod nvme_fabrics 00:37:44.914 rmmod nvme_keyring 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 44789 ']' 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 44789 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 44789 ']' 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 44789 00:37:44.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (44789) - No such process 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 44789 is not found' 00:37:44.914 Process with pid 44789 is not found 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:44.914 00:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:46.818 00:37:46.818 real 0m51.243s 00:37:46.818 user 1m23.743s 00:37:46.818 sys 0m13.566s 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.818 ************************************ 00:37:46.818 END TEST nvmf_digest 00:37:46.818 ************************************ 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:46.818 ************************************ 00:37:46.818 START TEST nvmf_bdevperf 00:37:46.818 ************************************ 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:37:46.818 * Looking for test storage... 00:37:46.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:37:46.818 00:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.077 00:18:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.077 00:18:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.077 --rc genhtml_branch_coverage=1 00:37:47.077 --rc genhtml_function_coverage=1 00:37:47.077 --rc genhtml_legend=1 00:37:47.077 --rc geninfo_all_blocks=1 00:37:47.077 --rc geninfo_unexecuted_blocks=1 00:37:47.077 00:37:47.077 ' 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.077 --rc genhtml_branch_coverage=1 00:37:47.077 --rc genhtml_function_coverage=1 00:37:47.077 --rc genhtml_legend=1 00:37:47.077 --rc geninfo_all_blocks=1 00:37:47.077 --rc geninfo_unexecuted_blocks=1 00:37:47.077 00:37:47.077 ' 00:37:47.077 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:47.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.077 --rc genhtml_branch_coverage=1 00:37:47.077 --rc genhtml_function_coverage=1 00:37:47.077 --rc genhtml_legend=1 00:37:47.077 --rc geninfo_all_blocks=1 00:37:47.077 --rc geninfo_unexecuted_blocks=1 00:37:47.077 00:37:47.078 ' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:47.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.078 --rc genhtml_branch_coverage=1 00:37:47.078 --rc genhtml_function_coverage=1 00:37:47.078 --rc genhtml_legend=1 00:37:47.078 --rc geninfo_all_blocks=1 00:37:47.078 --rc geninfo_unexecuted_blocks=1 00:37:47.078 00:37:47.078 ' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.078 00:18:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:47.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:37:47.078 00:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:52.356 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:52.357 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:52.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:52.357 Found net devices under 0000:af:00.0: cvl_0_0 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:52.357 Found net devices under 0000:af:00.1: cvl_0_1 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:52.357 00:18:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:52.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:52.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:37:52.357 00:37:52.357 --- 10.0.0.2 ping statistics --- 00:37:52.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.357 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:52.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:52.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:37:52.357 00:37:52.357 --- 10.0.0.1 ping statistics --- 00:37:52.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:52.357 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=51909 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 51909 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 51909 ']' 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:52.357 00:18:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:52.357 [2024-12-14 00:18:31.345797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:52.357 [2024-12-14 00:18:31.345907] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:52.357 [2024-12-14 00:18:31.461514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:52.617 [2024-12-14 00:18:31.569662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:52.617 [2024-12-14 00:18:31.569707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:37:52.617 [2024-12-14 00:18:31.569717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:52.617 [2024-12-14 00:18:31.569727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:52.617 [2024-12-14 00:18:31.569735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:52.617 [2024-12-14 00:18:31.571818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:52.617 [2024-12-14 00:18:31.571884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:52.617 [2024-12-14 00:18:31.571894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 [2024-12-14 00:18:32.198293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.184 00:18:32 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 Malloc0 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.184 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:37:53.184 [2024-12-14 00:18:32.323152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:53.443 { 00:37:53.443 "params": { 00:37:53.443 "name": "Nvme$subsystem", 00:37:53.443 "trtype": "$TEST_TRANSPORT", 00:37:53.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:53.443 "adrfam": "ipv4", 00:37:53.443 "trsvcid": "$NVMF_PORT", 00:37:53.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:53.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:53.443 "hdgst": ${hdgst:-false}, 00:37:53.443 "ddgst": ${ddgst:-false} 00:37:53.443 }, 00:37:53.443 "method": "bdev_nvme_attach_controller" 00:37:53.443 } 00:37:53.443 EOF 00:37:53.443 )") 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:53.443 00:18:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:53.443 "params": { 00:37:53.443 "name": "Nvme1", 00:37:53.443 "trtype": "tcp", 00:37:53.443 "traddr": "10.0.0.2", 00:37:53.443 "adrfam": "ipv4", 00:37:53.443 "trsvcid": "4420", 00:37:53.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:53.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:53.443 "hdgst": false, 00:37:53.443 "ddgst": false 00:37:53.443 }, 00:37:53.443 "method": "bdev_nvme_attach_controller" 00:37:53.443 }' 00:37:53.443 [2024-12-14 00:18:32.398929] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:53.443 [2024-12-14 00:18:32.399021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52053 ] 00:37:53.443 [2024-12-14 00:18:32.512206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.702 [2024-12-14 00:18:32.626655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.269 Running I/O for 1 seconds... 
00:37:55.205 9795.00 IOPS, 38.26 MiB/s 00:37:55.205 Latency(us) 00:37:55.205 [2024-12-13T23:18:34.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:55.205 Verification LBA range: start 0x0 length 0x4000 00:37:55.205 Nvme1n1 : 1.01 9835.72 38.42 0.00 0.00 12961.54 2605.84 10735.42 00:37:55.205 [2024-12-13T23:18:34.346Z] =================================================================================================================== 00:37:55.205 [2024-12-13T23:18:34.346Z] Total : 9835.72 38.42 0.00 0.00 12961.54 2605.84 10735.42 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=52457 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:56.142 { 00:37:56.142 "params": { 00:37:56.142 "name": "Nvme$subsystem", 00:37:56.142 "trtype": "$TEST_TRANSPORT", 00:37:56.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:56.142 "adrfam": "ipv4", 00:37:56.142 "trsvcid": "$NVMF_PORT", 00:37:56.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:56.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:56.142 "hdgst": ${hdgst:-false}, 00:37:56.142 "ddgst": 
${ddgst:-false} 00:37:56.142 }, 00:37:56.142 "method": "bdev_nvme_attach_controller" 00:37:56.142 } 00:37:56.142 EOF 00:37:56.142 )") 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:37:56.142 00:18:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:56.142 "params": { 00:37:56.142 "name": "Nvme1", 00:37:56.142 "trtype": "tcp", 00:37:56.142 "traddr": "10.0.0.2", 00:37:56.142 "adrfam": "ipv4", 00:37:56.142 "trsvcid": "4420", 00:37:56.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:56.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:56.142 "hdgst": false, 00:37:56.142 "ddgst": false 00:37:56.142 }, 00:37:56.142 "method": "bdev_nvme_attach_controller" 00:37:56.142 }' 00:37:56.142 [2024-12-14 00:18:35.170257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:37:56.142 [2024-12-14 00:18:35.170346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52457 ] 00:37:56.400 [2024-12-14 00:18:35.283487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.400 [2024-12-14 00:18:35.397561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.968 Running I/O for 15 seconds... 
00:37:58.841 9706.00 IOPS, 37.91 MiB/s [2024-12-13T23:18:38.243Z] 9708.50 IOPS, 37.92 MiB/s [2024-12-13T23:18:38.243Z] 00:18:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 51909 00:37:59.102 00:18:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:37:59.102 [2024-12-14 00:18:38.136351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 
[2024-12-14 00:18:38.136657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.102 [2024-12-14 00:18:38.136830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.102 [2024-12-14 00:18:38.136840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.136980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.136991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.137000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.103 [2024-12-14 00:18:38.137021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 
[2024-12-14 00:18:38.137042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 
00:18:38.137650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.103 [2024-12-14 00:18:38.137690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.103 [2024-12-14 00:18:38.137700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.137984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.137995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 
[2024-12-14 00:18:38.138239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.104 [2024-12-14 00:18:38.138523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.104 [2024-12-14 00:18:38.138532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:59.105 [2024-12-14 00:18:38.138573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 
00:18:38.138834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138944] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.138984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.138996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:59.105 [2024-12-14 00:18:38.139105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.139117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000326480 is same with the state(6) to be set 00:37:59.105 [2024-12-14 00:18:38.139131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:37:59.105 [2024-12-14 00:18:38.139143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:37:59.105 [2024-12-14 00:18:38.139152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36376 len:8 PRP1 0x0 PRP2 0x0 00:37:59.105 [2024-12-14 00:18:38.139165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:59.105 [2024-12-14 00:18:38.142542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.105 [2024-12-14 00:18:38.142617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.105 [2024-12-14 00:18:38.143193] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.105 [2024-12-14 00:18:38.143216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.105 [2024-12-14 00:18:38.143228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.105 [2024-12-14 00:18:38.143431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.105 [2024-12-14 00:18:38.143637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.105 [2024-12-14 00:18:38.143648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.105 [2024-12-14 00:18:38.143659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.105 [2024-12-14 00:18:38.143671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.105 [2024-12-14 00:18:38.156090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.105 [2024-12-14 00:18:38.156458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.105 [2024-12-14 00:18:38.156522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.105 [2024-12-14 00:18:38.156557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.105 [2024-12-14 00:18:38.157208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.105 [2024-12-14 00:18:38.157663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.105 [2024-12-14 00:18:38.157675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.105 [2024-12-14 00:18:38.157685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.105 [2024-12-14 00:18:38.157694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.105 [2024-12-14 00:18:38.169164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.105 [2024-12-14 00:18:38.169572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.105 [2024-12-14 00:18:38.169634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.169668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.170321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.170715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.170727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.170736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.170746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.106 [2024-12-14 00:18:38.182305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.106 [2024-12-14 00:18:38.182624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.106 [2024-12-14 00:18:38.182646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.182656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.182848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.183037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.183048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.183056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.183065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.106 [2024-12-14 00:18:38.195432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.106 [2024-12-14 00:18:38.195870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.106 [2024-12-14 00:18:38.195930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.195962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.196434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.196630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.196641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.196650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.196659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.106 [2024-12-14 00:18:38.208522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.106 [2024-12-14 00:18:38.208845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.106 [2024-12-14 00:18:38.208866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.208876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.209066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.209255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.209266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.209278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.209286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.106 [2024-12-14 00:18:38.221714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.106 [2024-12-14 00:18:38.222076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.106 [2024-12-14 00:18:38.222135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.222168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.222666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.222855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.222866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.222875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.222884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.106 [2024-12-14 00:18:38.234960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.106 [2024-12-14 00:18:38.235351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.106 [2024-12-14 00:18:38.235373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.106 [2024-12-14 00:18:38.235383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.106 [2024-12-14 00:18:38.235584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.106 [2024-12-14 00:18:38.235778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.106 [2024-12-14 00:18:38.235789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.106 [2024-12-14 00:18:38.235798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.106 [2024-12-14 00:18:38.235807] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.366 [2024-12-14 00:18:38.248430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.366 [2024-12-14 00:18:38.248784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.366 [2024-12-14 00:18:38.248841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.366 [2024-12-14 00:18:38.248874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.366 [2024-12-14 00:18:38.249451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.366 [2024-12-14 00:18:38.249646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.366 [2024-12-14 00:18:38.249657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.366 [2024-12-14 00:18:38.249666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.366 [2024-12-14 00:18:38.249675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.366 [2024-12-14 00:18:38.261677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.366 [2024-12-14 00:18:38.262015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.366 [2024-12-14 00:18:38.262036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.366 [2024-12-14 00:18:38.262047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.366 [2024-12-14 00:18:38.262235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.366 [2024-12-14 00:18:38.262424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.262435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.262450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.262458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.274807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.275272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.275330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.275364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.275980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.276169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.276180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.276189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.276198] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.287933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.288387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.288408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.288418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.288618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.288806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.288817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.288826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.288835] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.301094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.301548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.301573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.301583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.301761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.301941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.301951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.301959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.301968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.314188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.314667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.314700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.314710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.314890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.315068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.315078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.315086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.315095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.327227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.327701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.327760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.327792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.328457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.328928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.328938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.328947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.328956] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.340316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.340728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.340785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.340817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.341490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.342078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.342088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.342097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.342106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.353376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.353871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.353899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.353909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.354098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.354286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.354297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.354305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.354314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.366561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.367046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.367067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.367076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.367253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.367431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.367447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.367455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.367479] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.379683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.367 [2024-12-14 00:18:38.380093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.367 [2024-12-14 00:18:38.380151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.367 [2024-12-14 00:18:38.380183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.367 [2024-12-14 00:18:38.380848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.367 [2024-12-14 00:18:38.381299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.367 [2024-12-14 00:18:38.381313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.367 [2024-12-14 00:18:38.381321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.367 [2024-12-14 00:18:38.381330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.367 [2024-12-14 00:18:38.392759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.393155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.393176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.393186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.393380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.393581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.393593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.393601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.393611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.406168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.406657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.406680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.406690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.406883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.407077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.407088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.407097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.407106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.419494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.419937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.419958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.419968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.420161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.420355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.420366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.420379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.420388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.432722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.433202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.433223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.433233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.433421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.433634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.433646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.433655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.433664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.445931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.446403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.446473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.446507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.447155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.447546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.447564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.447578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.447591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.459902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.460303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.460325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.460336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.460547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.460753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.460765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.460774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.460784] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.473069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.473524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.473546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.473555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.473734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.473910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.473921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.473929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.473937] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.486256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.486713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.486735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.486745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.486934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.487122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.487136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.487145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.487155] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.368 [2024-12-14 00:18:38.499361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.368 [2024-12-14 00:18:38.499785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.368 [2024-12-14 00:18:38.499807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.368 [2024-12-14 00:18:38.499818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.368 [2024-12-14 00:18:38.500006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.368 [2024-12-14 00:18:38.500211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.368 [2024-12-14 00:18:38.500222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.368 [2024-12-14 00:18:38.500232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.368 [2024-12-14 00:18:38.500241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.512707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.513169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.513191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.513204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.513394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.513589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.513600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.513609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.513618] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.525764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.526283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.526315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.526981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.527487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.527498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.527507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.527516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.538809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.539274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.539332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.539364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.540046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.540510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.540522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.540530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.540539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.551868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.552334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.552355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.552365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.552563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.552752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.552762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.552771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.552780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.565025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.565451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.565471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.565481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.565659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.565838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.565848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.565856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.565865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.578261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.578749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.578771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.578781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.578969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.579157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.579168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.579176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.579185] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.591351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.591837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.591858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.591868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.592056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.592243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.592260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.592269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.592278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.629 [2024-12-14 00:18:38.604577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.629 [2024-12-14 00:18:38.605059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.629 [2024-12-14 00:18:38.605117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.629 [2024-12-14 00:18:38.605149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.629 [2024-12-14 00:18:38.605653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.629 [2024-12-14 00:18:38.605843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.629 [2024-12-14 00:18:38.605853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.629 [2024-12-14 00:18:38.605862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.629 [2024-12-14 00:18:38.605871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.617649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.618117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.618173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.618204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.618651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.618840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.618851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.618859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.618868] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.630783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.631233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.631254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.631263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.631448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.631653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.631663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.631672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.631684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.643900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.644387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.644410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.644420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.644618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.644812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.644823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.644831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.644841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.657281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.657770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.657792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.657802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.657995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.658188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.658200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.658209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.658218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.670385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.670884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.670941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.670972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.671528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.671716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.671727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.671736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.671744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.683528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.683873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.683893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.683902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.684080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.684258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.684268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.684276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.684284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.696607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.697021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.697078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.697109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.697605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.697794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.697806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.697816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.697826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.709775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.630 [2024-12-14 00:18:38.710187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.630 [2024-12-14 00:18:38.710208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.630 [2024-12-14 00:18:38.710217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.630 [2024-12-14 00:18:38.710405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:37:59.630 [2024-12-14 00:18:38.710601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:37:59.630 [2024-12-14 00:18:38.710613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:37:59.630 [2024-12-14 00:18:38.710622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:37:59.630 [2024-12-14 00:18:38.710631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:37:59.630 [2024-12-14 00:18:38.722835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.630 [2024-12-14 00:18:38.723306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.630 [2024-12-14 00:18:38.723364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.630 [2024-12-14 00:18:38.723404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.630 [2024-12-14 00:18:38.723902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.630 [2024-12-14 00:18:38.724091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.630 [2024-12-14 00:18:38.724128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.630 [2024-12-14 00:18:38.724136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.630 [2024-12-14 00:18:38.724146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.630 [2024-12-14 00:18:38.735945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.630 [2024-12-14 00:18:38.736395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.630 [2024-12-14 00:18:38.736416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.630 [2024-12-14 00:18:38.736426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.630 [2024-12-14 00:18:38.736641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.630 [2024-12-14 00:18:38.736835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.630 [2024-12-14 00:18:38.736846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.630 [2024-12-14 00:18:38.736855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.630 [2024-12-14 00:18:38.736864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.630 [2024-12-14 00:18:38.749192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.631 [2024-12-14 00:18:38.749692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.631 [2024-12-14 00:18:38.749752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.631 [2024-12-14 00:18:38.749785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.631 [2024-12-14 00:18:38.750396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.631 [2024-12-14 00:18:38.750717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.631 [2024-12-14 00:18:38.750736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.631 [2024-12-14 00:18:38.750750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.631 [2024-12-14 00:18:38.750764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.631 [2024-12-14 00:18:38.763250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.631 [2024-12-14 00:18:38.763734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.631 [2024-12-14 00:18:38.763758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.631 [2024-12-14 00:18:38.763769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.631 [2024-12-14 00:18:38.763974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.631 [2024-12-14 00:18:38.764183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.631 [2024-12-14 00:18:38.764195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.631 [2024-12-14 00:18:38.764205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.631 [2024-12-14 00:18:38.764214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.891 [2024-12-14 00:18:38.776500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.891 [2024-12-14 00:18:38.776971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.776993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.777003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.777191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.777380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.777391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.777400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.777409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.789721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.790195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.790248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.790281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.790851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.791164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.791182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.791196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.791209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.803627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.804085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.804142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.804174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.804840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.805216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.805228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.805241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.805250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.816754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.817241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.817300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.817333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.817996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.818324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.818335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.818343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.818352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.829815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.830277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.830338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.830370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.830896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.831084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.831096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.831104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.831113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.844048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.844497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.844520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.844531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.845145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.845351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.845362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.845372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.845381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.857300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.857795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.857854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.857887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.858309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.858504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.858516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.858525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.858534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.870406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.870869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.870928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.870960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.871465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.871655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.871666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.871674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.871683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.884360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.884838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.884897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.884929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.885449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.885655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.885667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.885676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.885686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.897544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.892 [2024-12-14 00:18:38.898001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.892 [2024-12-14 00:18:38.898029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.892 [2024-12-14 00:18:38.898040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.892 [2024-12-14 00:18:38.898234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.892 [2024-12-14 00:18:38.898428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.892 [2024-12-14 00:18:38.898446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.892 [2024-12-14 00:18:38.898455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.892 [2024-12-14 00:18:38.898464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.892 [2024-12-14 00:18:38.910841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.911281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.911302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.911313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.911513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.911718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.911729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.911737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.911752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:38.924107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.924543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.924564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.924574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.924762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.924949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.924960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.924969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.924978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:38.937153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.937609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.937631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.937641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.937831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.938027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.938037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.938045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.938053] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:38.950369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.950822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.950843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.950853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.951041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.951229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.951240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.951248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.951257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:38.963418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.963886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.963909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.963919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.964106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.964293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.964304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.964313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.964322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 6934.67 IOPS, 27.09 MiB/s [2024-12-13T23:18:39.034Z] [2024-12-14 00:18:38.976500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.976968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.976989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.976999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.977188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.977379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.977390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.977398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.977407] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:38.989706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:38.990168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:38.990226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:38.990258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:38.990740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:38.990928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:38.990939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:38.990948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:38.990957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:39.002906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:39.003312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:39.003333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:39.003342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:39.003546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:39.003734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:39.003745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:39.003753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:39.003762] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:39.016051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:37:59.893 [2024-12-14 00:18:39.016490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:59.893 [2024-12-14 00:18:39.016549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:37:59.893 [2024-12-14 00:18:39.016582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:37:59.893 [2024-12-14 00:18:39.017229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:37:59.893 [2024-12-14 00:18:39.017731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:37:59.893 [2024-12-14 00:18:39.017743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:37:59.893 [2024-12-14 00:18:39.017755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:37:59.893 [2024-12-14 00:18:39.017764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:37:59.893 [2024-12-14 00:18:39.029398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:37:59.893 [2024-12-14 00:18:39.029856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:59.893 [2024-12-14 00:18:39.029877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:37:59.893 [2024-12-14 00:18:39.029887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:37:59.893 [2024-12-14 00:18:39.030075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.030262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.030275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.030283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.030292] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.042487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.042941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.042962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.154 [2024-12-14 00:18:39.042972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.154 [2024-12-14 00:18:39.043160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.043347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.043358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.043367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.043376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.055769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.056219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.056240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.154 [2024-12-14 00:18:39.056249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.154 [2024-12-14 00:18:39.056444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.056633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.056643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.056652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.056661] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.068852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.069201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.069222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.154 [2024-12-14 00:18:39.069231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.154 [2024-12-14 00:18:39.069409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.069617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.069629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.069637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.069646] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.082018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.082465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.082525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.154 [2024-12-14 00:18:39.082558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.154 [2024-12-14 00:18:39.082978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.083167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.083178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.083186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.083195] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.095053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.095473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.095494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.154 [2024-12-14 00:18:39.095504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.154 [2024-12-14 00:18:39.095681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.154 [2024-12-14 00:18:39.095859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.154 [2024-12-14 00:18:39.095869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.154 [2024-12-14 00:18:39.095878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.154 [2024-12-14 00:18:39.095886] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.154 [2024-12-14 00:18:39.108153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.154 [2024-12-14 00:18:39.108605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.154 [2024-12-14 00:18:39.108629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.108639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.108827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.109015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.109026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.109034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.109043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.121321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.121766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.121787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.121797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.121984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.122172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.122183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.122191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.122207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.134415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.134866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.134887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.134896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.135083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.135271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.135282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.135290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.135299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.147582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.148018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.148038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.148047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.148257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.148457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.148469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.148478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.148487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.160878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.161318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.161339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.161349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.161550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.161744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.161755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.161764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.161773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.174130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.174595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.174652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.174684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.175150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.175469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.175487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.175501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.175514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.188219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.188680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.188702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.188713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.188918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.189123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.189138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.189147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.189157] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.201298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.201741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.201762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.201772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.201960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.202147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.202158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.202167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.202175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.214376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.214825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.214884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.214916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.215349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.215555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.215567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.215576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.215584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.227469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.227926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.227982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.155 [2024-12-14 00:18:39.228013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.155 [2024-12-14 00:18:39.228676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.155 [2024-12-14 00:18:39.229199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.155 [2024-12-14 00:18:39.229210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.155 [2024-12-14 00:18:39.229222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.155 [2024-12-14 00:18:39.229231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.155 [2024-12-14 00:18:39.240691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.155 [2024-12-14 00:18:39.241112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.155 [2024-12-14 00:18:39.241133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.156 [2024-12-14 00:18:39.241143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.156 [2024-12-14 00:18:39.241329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.156 [2024-12-14 00:18:39.241524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.156 [2024-12-14 00:18:39.241536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.156 [2024-12-14 00:18:39.241544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.156 [2024-12-14 00:18:39.241553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.156 [2024-12-14 00:18:39.253811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.156 [2024-12-14 00:18:39.254255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.156 [2024-12-14 00:18:39.254312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.156 [2024-12-14 00:18:39.254345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.156 [2024-12-14 00:18:39.255009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.156 [2024-12-14 00:18:39.255338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.156 [2024-12-14 00:18:39.255348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.156 [2024-12-14 00:18:39.255357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.156 [2024-12-14 00:18:39.255365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.156 [2024-12-14 00:18:39.266989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.156 [2024-12-14 00:18:39.267433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.156 [2024-12-14 00:18:39.267461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.156 [2024-12-14 00:18:39.267471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.156 [2024-12-14 00:18:39.267659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.156 [2024-12-14 00:18:39.267847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.156 [2024-12-14 00:18:39.267858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.156 [2024-12-14 00:18:39.267866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.156 [2024-12-14 00:18:39.267875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.156 [2024-12-14 00:18:39.280187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.156 [2024-12-14 00:18:39.280593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.156 [2024-12-14 00:18:39.280614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.156 [2024-12-14 00:18:39.280624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.156 [2024-12-14 00:18:39.280812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.156 [2024-12-14 00:18:39.280999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.156 [2024-12-14 00:18:39.281010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.156 [2024-12-14 00:18:39.281018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.156 [2024-12-14 00:18:39.281027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.416 [2024-12-14 00:18:39.293507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.416 [2024-12-14 00:18:39.293972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.416 [2024-12-14 00:18:39.294033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.416 [2024-12-14 00:18:39.294080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.416 [2024-12-14 00:18:39.294494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.416 [2024-12-14 00:18:39.294691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.416 [2024-12-14 00:18:39.294702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.416 [2024-12-14 00:18:39.294711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.416 [2024-12-14 00:18:39.294720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.416 [2024-12-14 00:18:39.306826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.416 [2024-12-14 00:18:39.307284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.416 [2024-12-14 00:18:39.307305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.416 [2024-12-14 00:18:39.307316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.416 [2024-12-14 00:18:39.307517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.416 [2024-12-14 00:18:39.307710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.416 [2024-12-14 00:18:39.307721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.416 [2024-12-14 00:18:39.307730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.416 [2024-12-14 00:18:39.307739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.416 [2024-12-14 00:18:39.320137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.416 [2024-12-14 00:18:39.320598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.320672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.320713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.321211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.321399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.321410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.321419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.321427] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.333335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.333737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.333757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.333767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.333955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.334144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.334155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.334163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.334172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.346505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.346953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.346973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.346983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.347170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.347358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.347368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.347377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.347385] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.359615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.359956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.359976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.359986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.360177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.360366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.360376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.360385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.360393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.372768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.373177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.373198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.373208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.373396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.373593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.373604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.373613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.373621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.385882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.386327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.386348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.386358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.386568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.386770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.386781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.386790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.386799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.399070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.399569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.399591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.399601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.399795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.399988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.400003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.400012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.400021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.412416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.412834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.412855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.412865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.413058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.413251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.413261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.413270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.413279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.425743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.426180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.426202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.426211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.426405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.426604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.426616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.426625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.426633] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.438932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.439387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.417 [2024-12-14 00:18:39.439408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.417 [2024-12-14 00:18:39.439418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.417 [2024-12-14 00:18:39.439612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.417 [2024-12-14 00:18:39.439799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.417 [2024-12-14 00:18:39.439810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.417 [2024-12-14 00:18:39.439818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.417 [2024-12-14 00:18:39.439831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.417 [2024-12-14 00:18:39.452163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.417 [2024-12-14 00:18:39.452490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.452512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.452522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.452710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.452899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.452909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.452917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.452926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.465225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.465584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.465606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.465616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.465804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.465992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.466003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.466011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.466020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.478585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.478951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.478973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.478983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.479171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.479367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.479378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.479387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.479396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.491872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.492277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.492341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.492374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.492909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.493099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.493110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.493119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.493127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.505186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.505623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.505645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.505655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.505848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.506026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.506036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.506044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.506053] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.518358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.518786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.518808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.518820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.519007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.519196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.519207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.519215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.519224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.531520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.531859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.531880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.531893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.532082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.532271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.532282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.532290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.532299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.418 [2024-12-14 00:18:39.544712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.418 [2024-12-14 00:18:39.545183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.418 [2024-12-14 00:18:39.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.418 [2024-12-14 00:18:39.545274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.418 [2024-12-14 00:18:39.545791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.418 [2024-12-14 00:18:39.545980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.418 [2024-12-14 00:18:39.545991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.418 [2024-12-14 00:18:39.545999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.418 [2024-12-14 00:18:39.546008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.558033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.558504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.558527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.558537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.558726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.558914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.558925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.558934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.558943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.571109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.571522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.571543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.571553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.571741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.571933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.571944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.571953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.571962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.584357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.584698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.584756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.584788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.585454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.586003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.586014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.586023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.586031] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.597534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.597900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.597920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.597929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.598117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.598304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.598315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.598324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.598333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.610681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.611142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.611199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.611230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.611893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.612260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.612271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.612293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.612301] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.623895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.624328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.624385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.624417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.624930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.625119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.625130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.625139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.625147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.637055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.637498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.637520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.637530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.637718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.679 [2024-12-14 00:18:39.637907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.679 [2024-12-14 00:18:39.637917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.679 [2024-12-14 00:18:39.637926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.679 [2024-12-14 00:18:39.637934] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.679 [2024-12-14 00:18:39.650280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:00.679 [2024-12-14 00:18:39.650679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:00.679 [2024-12-14 00:18:39.650700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:00.679 [2024-12-14 00:18:39.650711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:00.679 [2024-12-14 00:18:39.650904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:00.680 [2024-12-14 00:18:39.651098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:00.680 [2024-12-14 00:18:39.651109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:00.680 [2024-12-14 00:18:39.651118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:00.680 [2024-12-14 00:18:39.651127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:00.680 [2024-12-14 00:18:39.663714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.664149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.664170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.664180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.664372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.664582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.664594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.664609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.664617] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.677001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.677474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.677496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.677506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.677708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.677896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.677908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.677916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.677925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.690313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.690795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.690854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.690886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.691364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.691557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.691569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.691577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.691587] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.703522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.704000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.704025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.704036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.704224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.704412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.704424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.704434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.704449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.716662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.717027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.717047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.717057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.717245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.717433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.717449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.717458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.717466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.729817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.730286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.730306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.730316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.730509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.730697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.730708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.730717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.730725] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.743038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.743488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.743510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.743520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.743712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.743900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.743911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.743919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.743928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.756390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.756781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.756802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.756812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.756999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.757188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.757198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.757207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.757216] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.769554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.770010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.770031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.770041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.770228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.770417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.680 [2024-12-14 00:18:39.770428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.680 [2024-12-14 00:18:39.770436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.680 [2024-12-14 00:18:39.770453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.680 [2024-12-14 00:18:39.782668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.680 [2024-12-14 00:18:39.783119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.680 [2024-12-14 00:18:39.783139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.680 [2024-12-14 00:18:39.783149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.680 [2024-12-14 00:18:39.783327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.680 [2024-12-14 00:18:39.783534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.681 [2024-12-14 00:18:39.783546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.681 [2024-12-14 00:18:39.783555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.681 [2024-12-14 00:18:39.783564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.681 [2024-12-14 00:18:39.795708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.681 [2024-12-14 00:18:39.796131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.681 [2024-12-14 00:18:39.796152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.681 [2024-12-14 00:18:39.796161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.681 [2024-12-14 00:18:39.796339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.681 [2024-12-14 00:18:39.796545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.681 [2024-12-14 00:18:39.796556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.681 [2024-12-14 00:18:39.796565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.681 [2024-12-14 00:18:39.796574] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.681 [2024-12-14 00:18:39.808812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.681 [2024-12-14 00:18:39.809283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.681 [2024-12-14 00:18:39.809340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.681 [2024-12-14 00:18:39.809372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.681 [2024-12-14 00:18:39.810042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.681 [2024-12-14 00:18:39.810581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.681 [2024-12-14 00:18:39.810593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.681 [2024-12-14 00:18:39.810602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.681 [2024-12-14 00:18:39.810611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.941 [2024-12-14 00:18:39.822053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.941 [2024-12-14 00:18:39.822444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.941 [2024-12-14 00:18:39.822465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.941 [2024-12-14 00:18:39.822475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.941 [2024-12-14 00:18:39.822664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.941 [2024-12-14 00:18:39.822852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.941 [2024-12-14 00:18:39.822863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.941 [2024-12-14 00:18:39.822875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.941 [2024-12-14 00:18:39.822884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.941 [2024-12-14 00:18:39.835127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.941 [2024-12-14 00:18:39.835521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.941 [2024-12-14 00:18:39.835543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.941 [2024-12-14 00:18:39.835553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.941 [2024-12-14 00:18:39.835742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.941 [2024-12-14 00:18:39.835929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.941 [2024-12-14 00:18:39.835940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.941 [2024-12-14 00:18:39.835949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.941 [2024-12-14 00:18:39.835958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.941 [2024-12-14 00:18:39.848261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.941 [2024-12-14 00:18:39.848667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.941 [2024-12-14 00:18:39.848699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.941 [2024-12-14 00:18:39.848709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.941 [2024-12-14 00:18:39.848896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.941 [2024-12-14 00:18:39.849084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.941 [2024-12-14 00:18:39.849095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.941 [2024-12-14 00:18:39.849103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.941 [2024-12-14 00:18:39.849112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.941 [2024-12-14 00:18:39.861412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.941 [2024-12-14 00:18:39.861890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.941 [2024-12-14 00:18:39.861911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.941 [2024-12-14 00:18:39.861921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.941 [2024-12-14 00:18:39.862109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.941 [2024-12-14 00:18:39.862296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.941 [2024-12-14 00:18:39.862307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.941 [2024-12-14 00:18:39.862316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.941 [2024-12-14 00:18:39.862325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.941 [2024-12-14 00:18:39.874565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.941 [2024-12-14 00:18:39.874990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.941 [2024-12-14 00:18:39.875011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.941 [2024-12-14 00:18:39.875021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.875209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.875397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.875408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.875416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.875425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.887723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.888160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.888180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.888190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.888367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.888572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.888583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.888592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.888600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.900931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.901344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.901367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.901377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.901576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.901772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.901783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.901791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.901801] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.914203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.914652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.914674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.914687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.914881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.915075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.915086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.915095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.915104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.927546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.928028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.928100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.928132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.928653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.928848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.928858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.928868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.928877] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.940686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.941134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.941155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.941164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.941342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.941546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.941558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.941567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.941576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.953856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.954310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.954330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.954340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.954547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.954735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.954745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.954754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.954762] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.966999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.967476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.967535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.967567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.968005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.968183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.968193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.968201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.968210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 5201.00 IOPS, 20.32 MiB/s [2024-12-13T23:18:40.083Z] [2024-12-14 00:18:39.980164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.980633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.980692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.980725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.981373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.981948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.981959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.981968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.942 [2024-12-14 00:18:39.981976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.942 [2024-12-14 00:18:39.993349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.942 [2024-12-14 00:18:39.993830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.942 [2024-12-14 00:18:39.993890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.942 [2024-12-14 00:18:39.993923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.942 [2024-12-14 00:18:39.994589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.942 [2024-12-14 00:18:39.995174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.942 [2024-12-14 00:18:39.995195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.942 [2024-12-14 00:18:39.995209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:39.995223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.007487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.007909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.007934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.007946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.008154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.008359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.008372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.008381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.008391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.020820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.021219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.021243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.021256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.021458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.021654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.021666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.021674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.021684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.034277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.034750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.034773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.034784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.034979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.035173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.035193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.035203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.035215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.047606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.048061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.048090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.048102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.048298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.048500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.048513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.048522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.048531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.060948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.061418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.061444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.061455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.061651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.061846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.061857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.061866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.061875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:00.943 [2024-12-14 00:18:40.074328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:00.943 [2024-12-14 00:18:40.074732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:00.943 [2024-12-14 00:18:40.074756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:00.943 [2024-12-14 00:18:40.074768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:00.943 [2024-12-14 00:18:40.074989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:00.943 [2024-12-14 00:18:40.075312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:00.943 [2024-12-14 00:18:40.075346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:00.943 [2024-12-14 00:18:40.075385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:00.943 [2024-12-14 00:18:40.075417] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.203 [2024-12-14 00:18:40.087649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.203 [2024-12-14 00:18:40.088124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.203 [2024-12-14 00:18:40.088184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.203 [2024-12-14 00:18:40.088219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.203 [2024-12-14 00:18:40.088888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.203 [2024-12-14 00:18:40.089320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.203 [2024-12-14 00:18:40.089332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.203 [2024-12-14 00:18:40.089342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.203 [2024-12-14 00:18:40.089351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.203 [2024-12-14 00:18:40.101025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.203 [2024-12-14 00:18:40.101518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.101543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.101554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.101750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.101947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.101958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.101967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.101977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.114455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.114932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.114954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.114965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.115159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.115354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.115365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.115374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.115384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.127663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.128149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.128172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.128187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.128382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.128582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.128594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.128603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.128613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.141053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.141496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.141565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.141597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.142244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.142433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.142452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.142461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.142486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.154304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.154790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.154812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.154822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.155015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.155209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.155221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.155230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.155240] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.167603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.168067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.168089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.168099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.168294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.168497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.168509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.168518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.168528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.180909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.181369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.181430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.181482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.182130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.182355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.182367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.182376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.182385] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.194276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.194757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.194789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.194983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.195177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.195188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.195197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.195206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.207602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.204 [2024-12-14 00:18:40.207975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.204 [2024-12-14 00:18:40.207998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.204 [2024-12-14 00:18:40.208008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.204 [2024-12-14 00:18:40.208202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.204 [2024-12-14 00:18:40.208397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.204 [2024-12-14 00:18:40.208411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.204 [2024-12-14 00:18:40.208420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.204 [2024-12-14 00:18:40.208429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.204 [2024-12-14 00:18:40.220943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.204 [2024-12-14 00:18:40.221433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.204 [2024-12-14 00:18:40.221460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.204 [2024-12-14 00:18:40.221471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.204 [2024-12-14 00:18:40.221664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.204 [2024-12-14 00:18:40.221859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.204 [2024-12-14 00:18:40.221870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.204 [2024-12-14 00:18:40.221879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.204 [2024-12-14 00:18:40.221889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.204 [2024-12-14 00:18:40.234292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.204 [2024-12-14 00:18:40.234779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.234801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.234812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.235011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.235206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.235217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.235226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.235235] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.247633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.248104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.248154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.248188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.248743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.248938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.248949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.248958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.248971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.261030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.261479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.261502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.261512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.261706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.261901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.261912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.261921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.261931] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.274321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.274797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.274868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.274901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.275486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.275681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.275692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.275701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.275710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.287685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.288059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.288081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.288091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.288285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.288485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.288498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.288507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.288516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.301076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.301467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.301489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.301499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.301693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.301887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.301898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.301907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.301916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.314263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.314751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.314773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.314783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.314977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.315171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.315182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.315191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.315200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.327579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.328072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.328094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.328104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.328298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.328499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.328511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.328520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.328530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.205 [2024-12-14 00:18:40.340911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.205 [2024-12-14 00:18:40.341329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.205 [2024-12-14 00:18:40.341387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.205 [2024-12-14 00:18:40.341428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.205 [2024-12-14 00:18:40.341923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.205 [2024-12-14 00:18:40.342117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.205 [2024-12-14 00:18:40.342128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.205 [2024-12-14 00:18:40.342137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.205 [2024-12-14 00:18:40.342146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.354362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.354835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.354857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.354867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.355061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.355255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.355266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.355275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.355284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.367667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.368155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.368211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.368243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.368781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.369084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.369101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.369116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.369129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.381742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.382226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.382249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.382260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.382472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.382682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.382694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.382724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.382733] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.395138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.395603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.395626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.395637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.395831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.396025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.396036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.396045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.396054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.408321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.408805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.408826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.408837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.409030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.409224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.409235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.409244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.409253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.421646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.422127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.422185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.422216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.422846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.423040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.466 [2024-12-14 00:18:40.423057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.466 [2024-12-14 00:18:40.423069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.466 [2024-12-14 00:18:40.423078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.466 [2024-12-14 00:18:40.434940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.466 [2024-12-14 00:18:40.435298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.466 [2024-12-14 00:18:40.435319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.466 [2024-12-14 00:18:40.435329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.466 [2024-12-14 00:18:40.435529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.466 [2024-12-14 00:18:40.435724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.435735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.435745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.435754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.448323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.448796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.448898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.449565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.450072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.450082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.450091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.450115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.462570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.462960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.462982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.462993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.463199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.463405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.463416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.463426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.463436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.475815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.476241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.476647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.476797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.477247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.477448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.477461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.477470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.477480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.489082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.489564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.489637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.489670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.490174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.490502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.490521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.490535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.490550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.503224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.503712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.503735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.503745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.503950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.504156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.504168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.504178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.504188] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.516241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.516704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.516771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.516804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.517259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.517454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.517466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.517474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.517483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.529427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.529897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.529919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.529929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.530117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.530306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.530317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.530326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.530335] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.542599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.543260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.543285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.543297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.543499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.543689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.543701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.543711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.543720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.555820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.556187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.556208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.467 [2024-12-14 00:18:40.556219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.467 [2024-12-14 00:18:40.556410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.467 [2024-12-14 00:18:40.556626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.467 [2024-12-14 00:18:40.556638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.467 [2024-12-14 00:18:40.556647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.467 [2024-12-14 00:18:40.556657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.467 [2024-12-14 00:18:40.569120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.467 [2024-12-14 00:18:40.569460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.467 [2024-12-14 00:18:40.569482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.468 [2024-12-14 00:18:40.569492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.468 [2024-12-14 00:18:40.569681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.468 [2024-12-14 00:18:40.569870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.468 [2024-12-14 00:18:40.569881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.468 [2024-12-14 00:18:40.569890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.468 [2024-12-14 00:18:40.569899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:01.468 [2024-12-14 00:18:40.582253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.468 [2024-12-14 00:18:40.582615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.468 [2024-12-14 00:18:40.582637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.468 [2024-12-14 00:18:40.582646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.468 [2024-12-14 00:18:40.582835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.468 [2024-12-14 00:18:40.583024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.468 [2024-12-14 00:18:40.583035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.468 [2024-12-14 00:18:40.583043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.468 [2024-12-14 00:18:40.583052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.468 [2024-12-14 00:18:40.595347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.468 [2024-12-14 00:18:40.595823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.468 [2024-12-14 00:18:40.595893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.468 [2024-12-14 00:18:40.595925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.468 [2024-12-14 00:18:40.596465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.468 [2024-12-14 00:18:40.596654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.468 [2024-12-14 00:18:40.596668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.468 [2024-12-14 00:18:40.596677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.468 [2024-12-14 00:18:40.596686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.728 [2024-12-14 00:18:40.608560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.728 [2024-12-14 00:18:40.608958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.728 [2024-12-14 00:18:40.609016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.728 [2024-12-14 00:18:40.609048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.728 [2024-12-14 00:18:40.609552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.728 [2024-12-14 00:18:40.609746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.728 [2024-12-14 00:18:40.609758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.728 [2024-12-14 00:18:40.609767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.728 [2024-12-14 00:18:40.609782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.728 [2024-12-14 00:18:40.621633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.728 [2024-12-14 00:18:40.622085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.728 [2024-12-14 00:18:40.622131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.728 [2024-12-14 00:18:40.622165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.728 [2024-12-14 00:18:40.622730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.728 [2024-12-14 00:18:40.622919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.728 [2024-12-14 00:18:40.622930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.728 [2024-12-14 00:18:40.622939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.728 [2024-12-14 00:18:40.622948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.728 [2024-12-14 00:18:40.634697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.728 [2024-12-14 00:18:40.635047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.728 [2024-12-14 00:18:40.635068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.728 [2024-12-14 00:18:40.635078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.728 [2024-12-14 00:18:40.635257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.728 [2024-12-14 00:18:40.635435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.728 [2024-12-14 00:18:40.635452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.728 [2024-12-14 00:18:40.635463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.728 [2024-12-14 00:18:40.635488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.728 [2024-12-14 00:18:40.647766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.728 [2024-12-14 00:18:40.648193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.728 [2024-12-14 00:18:40.648213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.728 [2024-12-14 00:18:40.648223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.728 [2024-12-14 00:18:40.648411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.728 [2024-12-14 00:18:40.648607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.728 [2024-12-14 00:18:40.648618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.728 [2024-12-14 00:18:40.648627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.728 [2024-12-14 00:18:40.648636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.660934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.661455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.661466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.661660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.661855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.661866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.661876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.661885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.674288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.674734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.674756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.674766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.674961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.675163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.675174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.675184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.675192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.687627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.687978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.687999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.688009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.688196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.688385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.688396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.688404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.688413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.700747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.701210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.701267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.701299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.701770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.701958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.701969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.701978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.701987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.713952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.714424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.714496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.714529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.715178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.715758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.715769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.715778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.715787] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.727104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.727571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.727593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.727607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.727796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.727985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.727996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.728005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.728014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.740212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.740650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.740671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.740681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.740869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.741058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.741069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.741078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.741087] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.753654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.754026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.754047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.754057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.754250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.754450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.754462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.754471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.754480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.766782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.767116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.767138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.767148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.767339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.767533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.767544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.767553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.767563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.729 [2024-12-14 00:18:40.779913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.729 [2024-12-14 00:18:40.780388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.729 [2024-12-14 00:18:40.780458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.729 [2024-12-14 00:18:40.780492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.729 [2024-12-14 00:18:40.780949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.729 [2024-12-14 00:18:40.781137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.729 [2024-12-14 00:18:40.781148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.729 [2024-12-14 00:18:40.781156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.729 [2024-12-14 00:18:40.781165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.793075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.793504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.793527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.793537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.793725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.793914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.793924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.793933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.793942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.806246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.806757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.806778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.806788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.806977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.807166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.807182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.807191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.807200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.819400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.819764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.819794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.819972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.820151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.820162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.820170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.820179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.832505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.832907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.832927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.832937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.833126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.833314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.833325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.833333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.833342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.845705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.846133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.846193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.846226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.846890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.847308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.847319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.847328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.847340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.730 [2024-12-14 00:18:40.858896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.730 [2024-12-14 00:18:40.859330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.730 [2024-12-14 00:18:40.859387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.730 [2024-12-14 00:18:40.859419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.730 [2024-12-14 00:18:40.859878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.730 [2024-12-14 00:18:40.860067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.730 [2024-12-14 00:18:40.860078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.730 [2024-12-14 00:18:40.860086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.730 [2024-12-14 00:18:40.860095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.991 [2024-12-14 00:18:40.872185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.991 [2024-12-14 00:18:40.872611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.991 [2024-12-14 00:18:40.872671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.991 [2024-12-14 00:18:40.872705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.991 [2024-12-14 00:18:40.873228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.991 [2024-12-14 00:18:40.873417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.991 [2024-12-14 00:18:40.873430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.991 [2024-12-14 00:18:40.873443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.991 [2024-12-14 00:18:40.873452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.991 [2024-12-14 00:18:40.885298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.991 [2024-12-14 00:18:40.885752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.991 [2024-12-14 00:18:40.885773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.991 [2024-12-14 00:18:40.885783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.991 [2024-12-14 00:18:40.885970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.991 [2024-12-14 00:18:40.886158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.991 [2024-12-14 00:18:40.886169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.991 [2024-12-14 00:18:40.886177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.991 [2024-12-14 00:18:40.886186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.991 [2024-12-14 00:18:40.898433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.991 [2024-12-14 00:18:40.898773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.991 [2024-12-14 00:18:40.898793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.991 [2024-12-14 00:18:40.898803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.991 [2024-12-14 00:18:40.898990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.991 [2024-12-14 00:18:40.899177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.991 [2024-12-14 00:18:40.899188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.991 [2024-12-14 00:18:40.899196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.991 [2024-12-14 00:18:40.899205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.991 [2024-12-14 00:18:40.911625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.991 [2024-12-14 00:18:40.911963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.991 [2024-12-14 00:18:40.911985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.991 [2024-12-14 00:18:40.911996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.991 [2024-12-14 00:18:40.912191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.991 [2024-12-14 00:18:40.912386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.991 [2024-12-14 00:18:40.912397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.991 [2024-12-14 00:18:40.912407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.991 [2024-12-14 00:18:40.912415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.991 [2024-12-14 00:18:40.924905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.991 [2024-12-14 00:18:40.925226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.991 [2024-12-14 00:18:40.925247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.991 [2024-12-14 00:18:40.925257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.991 [2024-12-14 00:18:40.925457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.991 [2024-12-14 00:18:40.925651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.991 [2024-12-14 00:18:40.925663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.991 [2024-12-14 00:18:40.925672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.991 [2024-12-14 00:18:40.925681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:40.938165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:40.938546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:40.938567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:40.938581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:40.938770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:40.938960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:40.938972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:40.938980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:40.938990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:40.951362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:40.951837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:40.951897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:40.951930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:40.952465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:40.952654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:40.952665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:40.952674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:40.952683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:40.964530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:40.964824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:40.964846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:40.964856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:40.965045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:40.965233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:40.965245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:40.965253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:40.965262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 4160.80 IOPS, 16.25 MiB/s [2024-12-13T23:18:41.133Z] [2024-12-14 00:18:40.977811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:40.978222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:40.978244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:40.978254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:40.978454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:40.978662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:40.978672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:40.978681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:40.978690] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:40.991076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:40.991430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:40.991456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:40.991489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:40.991682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:40.991877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:40.991888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:40.991897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:40.991906] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:41.004298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:41.004649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:41.004671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:41.004681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:41.004869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:41.005057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:41.005068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:41.005076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:41.005085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:41.017492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:41.017882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:41.017903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:41.017913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:41.018101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:41.018290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:41.018304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:41.018312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:41.018321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:41.030611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:41.031001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:41.031060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:41.031093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:41.031604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:41.031792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:41.031803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:41.031811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:41.031820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:41.043900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:41.044220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:41.044241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:41.044250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:41.044444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:41.044634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:41.044645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:41.044654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:41.044662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.992 [2024-12-14 00:18:41.057271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.992 [2024-12-14 00:18:41.057656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.992 [2024-12-14 00:18:41.057716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.992 [2024-12-14 00:18:41.057748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.992 [2024-12-14 00:18:41.058344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.992 [2024-12-14 00:18:41.058541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.992 [2024-12-14 00:18:41.058552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.992 [2024-12-14 00:18:41.058561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.992 [2024-12-14 00:18:41.058573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.993 [2024-12-14 00:18:41.070461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.993 [2024-12-14 00:18:41.070956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.993 [2024-12-14 00:18:41.071013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.993 [2024-12-14 00:18:41.071044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.993 [2024-12-14 00:18:41.071605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.993 [2024-12-14 00:18:41.071794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.993 [2024-12-14 00:18:41.071806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.993 [2024-12-14 00:18:41.071814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.993 [2024-12-14 00:18:41.071823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.993 [2024-12-14 00:18:41.083696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.993 [2024-12-14 00:18:41.084115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.993 [2024-12-14 00:18:41.084173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.993 [2024-12-14 00:18:41.084205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.993 [2024-12-14 00:18:41.084675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.993 [2024-12-14 00:18:41.084865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.993 [2024-12-14 00:18:41.084875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.993 [2024-12-14 00:18:41.084884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.993 [2024-12-14 00:18:41.084892] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.993 [2024-12-14 00:18:41.096900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.993 [2024-12-14 00:18:41.097275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.993 [2024-12-14 00:18:41.097296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.993 [2024-12-14 00:18:41.097306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.993 [2024-12-14 00:18:41.097500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.993 [2024-12-14 00:18:41.097689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.993 [2024-12-14 00:18:41.097700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.993 [2024-12-14 00:18:41.097708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:01.993 [2024-12-14 00:18:41.097718] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:01.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 51909 Killed "${NVMF_APP[@]}" "$@" 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:01.993 [2024-12-14 00:18:41.110294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:01.993 [2024-12-14 00:18:41.110674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:01.993 [2024-12-14 00:18:41.110698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:01.993 [2024-12-14 00:18:41.110708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:01.993 [2024-12-14 00:18:41.110901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:01.993 [2024-12-14 00:18:41.111096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:01.993 [2024-12-14 00:18:41.111107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:01.993 [2024-12-14 00:18:41.111116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:38:01.993 [2024-12-14 00:18:41.111125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=53491 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 53491 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 53491 ']' 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:01.993 00:18:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:01.993 [2024-12-14 00:18:41.123701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:01.993 [2024-12-14 00:18:41.124078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:01.993 [2024-12-14 00:18:41.124100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:01.993 [2024-12-14 00:18:41.124110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:01.993 [2024-12-14 00:18:41.124302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:01.993 [2024-12-14 00:18:41.124505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:01.993 [2024-12-14 00:18:41.124518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:01.993 [2024-12-14 00:18:41.124530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:01.993 [2024-12-14 00:18:41.124540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.137158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.137562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.137585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.137595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.137789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.137983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.137996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.138005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.138014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.150600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.150971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.150993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.151003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.151196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.151391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.151402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.151411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.151420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.164012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.164507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.164530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.164542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.164738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.164933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.164945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.164955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.164964] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.177443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.177892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.177918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.177930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.178127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.178332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.178343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.178352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.178362] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.190777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.191234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.191256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.191267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.191473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.191671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.191682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.191691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.191701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.193879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:38:02.254 [2024-12-14 00:18:41.193954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:02.254 [2024-12-14 00:18:41.204171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.204617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.204640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.204651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.204850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.205048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.205059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.205069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.205078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.217546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.217986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.218008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.218020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.218218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.218416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.218428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.218444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.218455] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.230990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.231479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.231502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.231513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.231712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.254 [2024-12-14 00:18:41.231908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.254 [2024-12-14 00:18:41.231920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.254 [2024-12-14 00:18:41.231929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.254 [2024-12-14 00:18:41.231938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.254 [2024-12-14 00:18:41.244513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.254 [2024-12-14 00:18:41.244969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.254 [2024-12-14 00:18:41.244991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.254 [2024-12-14 00:18:41.245002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.254 [2024-12-14 00:18:41.245199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.245396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.245407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.245416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.245426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.257981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.258429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.258455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.258470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.258667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.258864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.258875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.258884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.258893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.271348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.271814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.271838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.271849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.272046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.272242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.272254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.272263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.272273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.284638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.285026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.285047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.285058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.285254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.285455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.285468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.285477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.285487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.297963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.298405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.298427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.298443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.298640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.298840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.298851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.298859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.298869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.311297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.311678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.311699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.311709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.311904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.312101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.312113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.312122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.312131] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.317991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:02.255 [2024-12-14 00:18:41.324734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.325156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.325177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.325188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.325383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.325584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.325596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.325606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.325615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.338118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.338591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.338616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.338627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.338826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.339025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.339040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.339049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.339059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.351411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.351861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.351883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.351894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.352087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.352279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.352290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.352298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.352308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.364701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.365138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.365160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.365170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.365361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.255 [2024-12-14 00:18:41.365575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.255 [2024-12-14 00:18:41.365588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.255 [2024-12-14 00:18:41.365605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.255 [2024-12-14 00:18:41.365614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.255 [2024-12-14 00:18:41.377916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.255 [2024-12-14 00:18:41.378360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.255 [2024-12-14 00:18:41.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.255 [2024-12-14 00:18:41.378392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.255 [2024-12-14 00:18:41.378608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.256 [2024-12-14 00:18:41.378805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.256 [2024-12-14 00:18:41.378817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.256 [2024-12-14 00:18:41.378826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.256 [2024-12-14 00:18:41.378839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.256 [2024-12-14 00:18:41.391164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.256 [2024-12-14 00:18:41.391601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.256 [2024-12-14 00:18:41.391623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.256 [2024-12-14 00:18:41.391638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.256 [2024-12-14 00:18:41.391833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.256 [2024-12-14 00:18:41.392028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.256 [2024-12-14 00:18:41.392040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.256 [2024-12-14 00:18:41.392049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.256 [2024-12-14 00:18:41.392058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.404761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.590 [2024-12-14 00:18:41.405165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.590 [2024-12-14 00:18:41.405193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.590 [2024-12-14 00:18:41.405208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.590 [2024-12-14 00:18:41.405414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.590 [2024-12-14 00:18:41.405626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.590 [2024-12-14 00:18:41.405641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.590 [2024-12-14 00:18:41.405651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.590 [2024-12-14 00:18:41.405664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.418242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.590 [2024-12-14 00:18:41.418732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.590 [2024-12-14 00:18:41.418754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.590 [2024-12-14 00:18:41.418766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.590 [2024-12-14 00:18:41.418964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.590 [2024-12-14 00:18:41.419161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.590 [2024-12-14 00:18:41.419173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.590 [2024-12-14 00:18:41.419182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.590 [2024-12-14 00:18:41.419191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.429224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:02.590 [2024-12-14 00:18:41.429256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:02.590 [2024-12-14 00:18:41.429267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:02.590 [2024-12-14 00:18:41.429278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:02.590 [2024-12-14 00:18:41.429286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:02.590 [2024-12-14 00:18:41.431392] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:38:02.590 [2024-12-14 00:18:41.431418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:38:02.590 [2024-12-14 00:18:41.431426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:38:02.590 [2024-12-14 00:18:41.431609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.590 [2024-12-14 00:18:41.432056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.590 [2024-12-14 00:18:41.432078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.590 [2024-12-14 00:18:41.432089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.590 [2024-12-14 00:18:41.432286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.590 [2024-12-14 00:18:41.432493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.590 [2024-12-14 00:18:41.432506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.590 [2024-12-14 00:18:41.432516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.590 [2024-12-14 00:18:41.432525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.444995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.590 [2024-12-14 00:18:41.445468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.590 [2024-12-14 00:18:41.445493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.590 [2024-12-14 00:18:41.445507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.590 [2024-12-14 00:18:41.445705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.590 [2024-12-14 00:18:41.445904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.590 [2024-12-14 00:18:41.445915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.590 [2024-12-14 00:18:41.445925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.590 [2024-12-14 00:18:41.445935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.458411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:02.590 [2024-12-14 00:18:41.458859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:02.590 [2024-12-14 00:18:41.458881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:02.590 [2024-12-14 00:18:41.458892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:02.590 [2024-12-14 00:18:41.459089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:02.590 [2024-12-14 00:18:41.459287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:02.590 [2024-12-14 00:18:41.459303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:02.590 [2024-12-14 00:18:41.459312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:02.590 [2024-12-14 00:18:41.459322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:02.590 [2024-12-14 00:18:41.471769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.590 [2024-12-14 00:18:41.472226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.472248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.472259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.472462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.472662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.472673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.472683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.472692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.485147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.485597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.485621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.485633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.485829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.486028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.486039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.486048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.486058] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.498484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.498916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.498939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.498950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.499148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.499345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.499356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.499366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.499380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.511848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.512321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.512347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.512359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.512564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.512764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.512775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.512786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.512796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.525284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.525719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.525743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.525754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.525953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.526152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.526164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.526173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.526184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.538646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.539098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.539120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.539131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.539327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.539531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.539543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.539553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.539562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.552010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.552485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.552508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.552519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.552715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.552912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.552924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.552933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.552943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.565441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.565885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.565907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.565918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.566114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.566310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.566323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.566332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.566342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.578779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.579261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.579282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.579293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.579495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.579693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.579705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.579714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.579723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.592123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.592552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.591 [2024-12-14 00:18:41.592575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.591 [2024-12-14 00:18:41.592590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.591 [2024-12-14 00:18:41.592784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.591 [2024-12-14 00:18:41.592979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.591 [2024-12-14 00:18:41.592990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.591 [2024-12-14 00:18:41.592998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.591 [2024-12-14 00:18:41.593008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.591 [2024-12-14 00:18:41.605418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.591 [2024-12-14 00:18:41.605857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.605879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.605890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.606084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.606279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.606291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.606299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.606308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.618711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.619156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.619177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.619188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.619382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.619582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.619594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.619603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.619612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.632005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.632455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.632478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.632489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.632682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.632881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.632892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.632902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.632911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.645295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.645744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.645767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.645778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.645984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.646179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.646191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.646200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.646211] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.658862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.659360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.659392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.659409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.659627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.659837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.659855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.659867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.659877] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.672219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.672635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.672659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.672670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.672869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.673067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.673082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.673092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.673102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.685578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.686037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.686061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.686073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.686284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.686498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.686512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.686525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.686539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.698957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.699429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.699457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.699468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.699666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.699863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.699874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.699884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.699893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.592 [2024-12-14 00:18:41.712316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.592 [2024-12-14 00:18:41.712794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.592 [2024-12-14 00:18:41.712817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.592 [2024-12-14 00:18:41.712828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.592 [2024-12-14 00:18:41.713023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.592 [2024-12-14 00:18:41.713218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.592 [2024-12-14 00:18:41.713229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.592 [2024-12-14 00:18:41.713238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.592 [2024-12-14 00:18:41.713252] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.862 [2024-12-14 00:18:41.725700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.862 [2024-12-14 00:18:41.726149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.862 [2024-12-14 00:18:41.726171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.862 [2024-12-14 00:18:41.726182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.862 [2024-12-14 00:18:41.726378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.862 [2024-12-14 00:18:41.726581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.862 [2024-12-14 00:18:41.726593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.862 [2024-12-14 00:18:41.726602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.862 [2024-12-14 00:18:41.726611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.862 [2024-12-14 00:18:41.739016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.862 [2024-12-14 00:18:41.739388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.862 [2024-12-14 00:18:41.739409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.862 [2024-12-14 00:18:41.739420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.862 [2024-12-14 00:18:41.739622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.862 [2024-12-14 00:18:41.739817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.862 [2024-12-14 00:18:41.739828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.862 [2024-12-14 00:18:41.739837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.862 [2024-12-14 00:18:41.739846] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.862 [2024-12-14 00:18:41.752418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.862 [2024-12-14 00:18:41.752848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.862 [2024-12-14 00:18:41.752896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.862 [2024-12-14 00:18:41.752907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.862 [2024-12-14 00:18:41.753103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.862 [2024-12-14 00:18:41.753299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.862 [2024-12-14 00:18:41.753310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.862 [2024-12-14 00:18:41.753319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.862 [2024-12-14 00:18:41.753329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.862 [2024-12-14 00:18:41.765743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.862 [2024-12-14 00:18:41.766197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.862 [2024-12-14 00:18:41.766218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.862 [2024-12-14 00:18:41.766229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.862 [2024-12-14 00:18:41.766425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.862 [2024-12-14 00:18:41.766629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.862 [2024-12-14 00:18:41.766641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.862 [2024-12-14 00:18:41.766651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.862 [2024-12-14 00:18:41.766660] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.862 [2024-12-14 00:18:41.779107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.862 [2024-12-14 00:18:41.779570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.862 [2024-12-14 00:18:41.779594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.862 [2024-12-14 00:18:41.779605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.862 [2024-12-14 00:18:41.779801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.862 [2024-12-14 00:18:41.779997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.780009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.780018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.780027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.792429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.792880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.792902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.792913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.793107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.793302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.793313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.793322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.793331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.805738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.806204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.806226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.806240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.806434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.806636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.806648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.806656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.806666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.819061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.819524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.819546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.819556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.819752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.819947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.819958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.819968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.819977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.832369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.832801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.832823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.832834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.833028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.833224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.833235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.833244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.833253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.845654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.846124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.846145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.846155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.846350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.846555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.846567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.846577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.846587] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.859008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.859447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.859468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.859479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.859673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.859869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.859880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.859889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.859899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.872289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.872739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.872761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.872772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.872966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.873161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.873172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.873181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.873190] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.885589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.886059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.886080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.886090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.886283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.886485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.886497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.886510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.886519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.898939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.899406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.899428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.899443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.899639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.863 [2024-12-14 00:18:41.899833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.863 [2024-12-14 00:18:41.899845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.863 [2024-12-14 00:18:41.899854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.863 [2024-12-14 00:18:41.899864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.863 [2024-12-14 00:18:41.912258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.863 [2024-12-14 00:18:41.912746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.863 [2024-12-14 00:18:41.912769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.863 [2024-12-14 00:18:41.912780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.863 [2024-12-14 00:18:41.912975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.913170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.913182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.913191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.913200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 [2024-12-14 00:18:41.925595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.926061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.926082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.926093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.926287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.926488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.926500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.926510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.926519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 [2024-12-14 00:18:41.938918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.939395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.939417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.939427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.939633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.939829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.939841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.939850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.939860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 [2024-12-14 00:18:41.952347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.952808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.952830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.952840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.953033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.953229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.953241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.953250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.953260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 [2024-12-14 00:18:41.965672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.966142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.966163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.966174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.966367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.966569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.966582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.966591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.966601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 3467.33 IOPS, 13.54 MiB/s [2024-12-13T23:18:42.005Z] [2024-12-14 00:18:41.979076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.979551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.979580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.979592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.979785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.979981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.979993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.980002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.980011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:02.864 [2024-12-14 00:18:41.992406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:02.864 [2024-12-14 00:18:41.992808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:02.864 [2024-12-14 00:18:41.992830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:02.864 [2024-12-14 00:18:41.992841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:02.864 [2024-12-14 00:18:41.993036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:02.864 [2024-12-14 00:18:41.993231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:02.864 [2024-12-14 00:18:41.993242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:02.864 [2024-12-14 00:18:41.993251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:02.864 [2024-12-14 00:18:41.993261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.124 [2024-12-14 00:18:42.005843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.124 [2024-12-14 00:18:42.006302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.124 [2024-12-14 00:18:42.006322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.124 [2024-12-14 00:18:42.006333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.124 [2024-12-14 00:18:42.006535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.124 [2024-12-14 00:18:42.006732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.124 [2024-12-14 00:18:42.006744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.124 [2024-12-14 00:18:42.006754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.124 [2024-12-14 00:18:42.006764] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.124 [2024-12-14 00:18:42.019154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.124 [2024-12-14 00:18:42.019643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.124 [2024-12-14 00:18:42.019665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.124 [2024-12-14 00:18:42.019675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.124 [2024-12-14 00:18:42.019870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.124 [2024-12-14 00:18:42.020065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.124 [2024-12-14 00:18:42.020077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.124 [2024-12-14 00:18:42.020087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.124 [2024-12-14 00:18:42.020097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.124 [2024-12-14 00:18:42.032526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.124 [2024-12-14 00:18:42.032884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.124 [2024-12-14 00:18:42.032905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.124 [2024-12-14 00:18:42.032916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.124 [2024-12-14 00:18:42.033111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.124 [2024-12-14 00:18:42.033307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.124 [2024-12-14 00:18:42.033318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.124 [2024-12-14 00:18:42.033327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.124 [2024-12-14 00:18:42.033337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.124 [2024-12-14 00:18:42.045917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.124 [2024-12-14 00:18:42.046247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.124 [2024-12-14 00:18:42.046269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.124 [2024-12-14 00:18:42.046279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.124 [2024-12-14 00:18:42.046480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.124 [2024-12-14 00:18:42.046676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.124 [2024-12-14 00:18:42.046687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.124 [2024-12-14 00:18:42.046697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.124 [2024-12-14 00:18:42.046710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.124 [2024-12-14 00:18:42.049307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.124 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.125 [2024-12-14 00:18:42.059254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.059715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.059736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.059747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.059958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.060155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.060166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.060175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.060185] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.072593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.073036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.073057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.073068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.073262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.073462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.073474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.073483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.073493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.085888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.086330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.086351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.086361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.086561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.086758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.086772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.086782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.086791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.099252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.099676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.099701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.099713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.099914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.100113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.100125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.100135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.100144] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.112617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.113041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.113063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.113074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.113270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.113474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.113487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.113497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.113507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.125921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.126370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.126392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.126403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.126604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.126800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.126818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.126827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.126840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.139283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.139709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.139731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.139742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.139938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.140133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.140145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.140154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.140163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 [2024-12-14 00:18:42.152594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.153037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.153058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.153069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.153265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.153468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.153480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.153490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.153502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 Malloc0 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.125 [2024-12-14 00:18:42.165936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.125 [2024-12-14 00:18:42.166366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:03.125 [2024-12-14 00:18:42.166387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420 00:38:03.125 [2024-12-14 00:18:42.166397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:03.125 [2024-12-14 00:18:42.166598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:03.125 [2024-12-14 00:18:42.166796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:03.125 [2024-12-14 00:18:42.166808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:03.125 [2024-12-14 00:18:42.166820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:03.125 [2024-12-14 00:18:42.166831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.125 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:03.126 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:03.126 [2024-12-14 00:18:42.179061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.126 [2024-12-14 00:18:42.179246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:03.126 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.126 00:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 52457 00:38:03.126 [2024-12-14 00:18:42.216571] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:38:04.999 4015.57 IOPS, 15.69 MiB/s [2024-12-13T23:18:45.082Z] 4735.88 IOPS, 18.50 MiB/s [2024-12-13T23:18:46.017Z] 5289.11 IOPS, 20.66 MiB/s [2024-12-13T23:18:47.395Z] 5740.40 IOPS, 22.42 MiB/s [2024-12-13T23:18:48.330Z] 6106.91 IOPS, 23.86 MiB/s [2024-12-13T23:18:49.266Z] 6431.08 IOPS, 25.12 MiB/s [2024-12-13T23:18:50.203Z] 6695.46 IOPS, 26.15 MiB/s [2024-12-13T23:18:51.139Z] 6919.21 IOPS, 27.03 MiB/s 00:38:11.998 Latency(us) 00:38:11.998 [2024-12-13T23:18:51.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:11.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:11.998 Verification LBA range: start 0x0 length 0x4000 00:38:11.998 Nvme1n1 : 15.01 7101.11 27.74 11818.45 0.00 6743.20 721.68 41693.38 00:38:11.998 [2024-12-13T23:18:51.139Z] =================================================================================================================== 00:38:11.998 [2024-12-13T23:18:51.139Z] Total : 7101.11 27.74 11818.45 0.00 6743.20 721.68 41693.38 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:38:12.935 00:18:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:12.935 rmmod nvme_tcp 00:38:12.935 rmmod nvme_fabrics 00:38:12.935 rmmod nvme_keyring 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:38:12.935 00:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 53491 ']' 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 53491 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 53491 ']' 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 53491 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 53491 00:38:12.935 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:12.936 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:12.936 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 53491' 00:38:12.936 killing process with pid 53491 00:38:12.936 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 53491 00:38:12.936 00:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 53491 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:14.313 00:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:16.847 00:38:16.847 real 0m29.588s 00:38:16.847 user 1m14.508s 00:38:16.847 sys 0m6.414s 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:16.847 ************************************ 00:38:16.847 END TEST nvmf_bdevperf 00:38:16.847 ************************************ 00:38:16.847 00:18:55 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:38:16.847 ************************************ 00:38:16.847 START TEST nvmf_target_disconnect 00:38:16.847 ************************************ 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:38:16.847 * Looking for test storage... 00:38:16.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.847 00:18:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.847 --rc genhtml_branch_coverage=1 00:38:16.847 --rc genhtml_function_coverage=1 00:38:16.847 --rc genhtml_legend=1 00:38:16.847 --rc geninfo_all_blocks=1 00:38:16.847 --rc geninfo_unexecuted_blocks=1 
00:38:16.847 00:38:16.847 ' 00:38:16.847 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.847 --rc genhtml_branch_coverage=1 00:38:16.847 --rc genhtml_function_coverage=1 00:38:16.847 --rc genhtml_legend=1 00:38:16.847 --rc geninfo_all_blocks=1 00:38:16.848 --rc geninfo_unexecuted_blocks=1 00:38:16.848 00:38:16.848 ' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:16.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.848 --rc genhtml_branch_coverage=1 00:38:16.848 --rc genhtml_function_coverage=1 00:38:16.848 --rc genhtml_legend=1 00:38:16.848 --rc geninfo_all_blocks=1 00:38:16.848 --rc geninfo_unexecuted_blocks=1 00:38:16.848 00:38:16.848 ' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:16.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.848 --rc genhtml_branch_coverage=1 00:38:16.848 --rc genhtml_function_coverage=1 00:38:16.848 --rc genhtml_legend=1 00:38:16.848 --rc geninfo_all_blocks=1 00:38:16.848 --rc geninfo_unexecuted_blocks=1 00:38:16.848 00:38:16.848 ' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:16.848 00:18:55 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:16.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
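The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and the left side expands to an empty string. A minimal sketch of the failure and a defaulting guard (the `flag` variable name is illustrative, not from the SPDK scripts):

```shell
flag=''
# Reproduces the error: an empty string is not a valid integer operand,
# so the test fails with a nonzero status (error message suppressed here).
if [ "$flag" -eq 1 ] 2>/dev/null; then echo enabled; else echo disabled; fi
# Defaulting the expansion keeps the numeric comparison well-formed.
if [ "${flag:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi
```

Both tests print `disabled` here, but only the second does so without tripping the integer-expression error that the log records at common.sh line 33.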
/dev/null' 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:38:16.848 00:18:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:38:22.121 
00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:22.121 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:22.121 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:22.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:22.122 Found net devices under 0000:af:00.0: cvl_0_0 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:22.122 Found net devices under 0000:af:00.1: cvl_0_1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:22.122 00:19:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:22.122 00:19:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:22.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:22.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:38:22.122 00:38:22.122 --- 10.0.0.2 ping statistics --- 00:38:22.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.122 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:22.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:22.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:38:22.122 00:38:22.122 --- 10.0.0.1 ping statistics --- 00:38:22.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:22.122 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:22.122 00:19:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:22.122 ************************************ 00:38:22.122 START TEST nvmf_target_disconnect_tc1 00:38:22.122 ************************************ 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:38:22.122 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:22.122 [2024-12-14 00:19:01.245812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:22.122 [2024-12-14 00:19:01.245993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325580 
with addr=10.0.0.2, port=4420 00:38:22.122 [2024-12-14 00:19:01.246206] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:38:22.122 [2024-12-14 00:19:01.246246] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:22.122 [2024-12-14 00:19:01.246280] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:38:22.122 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:38:22.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:38:22.381 Initializing NVMe Controllers 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:22.381 00:38:22.381 real 0m0.192s 00:38:22.381 user 0m0.082s 00:38:22.381 sys 0m0.110s 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:38:22.381 ************************************ 00:38:22.381 END TEST nvmf_target_disconnect_tc1 00:38:22.381 ************************************ 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:22.381 00:19:01 
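tc1 above passes precisely because the wrapped `reconnect` run fails: the `NOT` helper inverts the exit status (`es=1`) so an expected connection error counts as success. A minimal sketch of an expect-failure wrapper, simplified from what the log shows (the real autotest_common.sh helper also inspects the exit-status range, e.g. `(( es > 128 ))`):

```shell
# Sketch of an expect-failure wrapper: succeed iff the command fails.
NOT() {
  if "$@"; then
    return 1                  # unexpected success
  fi
  return 0                    # failure was expected
}
NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

Here both `echo` lines fire: `NOT false` succeeds because `false` fails, and `NOT true` returns nonzero because `true` unexpectedly succeeded.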
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:38:22.381 ************************************ 00:38:22.381 START TEST nvmf_target_disconnect_tc2 00:38:22.381 ************************************ 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:38:22.381 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=58790 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 58790 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 58790 ']' 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:22.382 00:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:22.382 [2024-12-14 00:19:01.431895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:22.382 [2024-12-14 00:19:01.431986] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:22.641 [2024-12-14 00:19:01.562368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:22.641 [2024-12-14 00:19:01.668787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:22.641 [2024-12-14 00:19:01.668834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:22.641 [2024-12-14 00:19:01.668844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:22.641 [2024-12-14 00:19:01.668854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:22.641 [2024-12-14 00:19:01.668861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:22.641 [2024-12-14 00:19:01.671146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:22.641 [2024-12-14 00:19:01.671213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:22.641 [2024-12-14 00:19:01.671284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:22.641 [2024-12-14 00:19:01.671307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.207 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.475 Malloc0 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.475 00:19:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.475 [2024-12-14 00:19:02.381847] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.475 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.476 00:19:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:23.476 [2024-12-14 00:19:02.414164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=58988
00:38:23.476 00:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:38:25.381 00:19:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 58790
00:38:25.381 00:19:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Write completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 [2024-12-14 00:19:04.457504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.381 starting I/O failed
00:38:25.381 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 [2024-12-14 00:19:04.457867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 [2024-12-14 00:19:04.458244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Read completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 Write completed with error (sct=0, sc=8)
00:38:25.382 starting I/O failed
00:38:25.382 [2024-12-14 00:19:04.458596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:38:25.382 [2024-12-14 00:19:04.458707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.458731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.458986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.459234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.459499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.459658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.459782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.459957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.459971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.460128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.460142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.460424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.460478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.460725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.460769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.382 qpair failed and we were unable to recover it.
00:38:25.382 [2024-12-14 00:19:04.460985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.382 [2024-12-14 00:19:04.461028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.461231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.461273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.461507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.461551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.461760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.461802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.462085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.462128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.462306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.462320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.462552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.462566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.462710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.462723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.462880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.462895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.463058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.463072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.463312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.463325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.463573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.463587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.463688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.463702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.463878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.463891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.464114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.464127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.464274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.464288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.464531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.464575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.464787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.464829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.465038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.465080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.465369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.465411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.465693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.465708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.465933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.465947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.466060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.466247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.466419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.466525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.466763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.466989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.467002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.467170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.467184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.467366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.467409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.467646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.467689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.467845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.467891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.468183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.468225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.468505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.468549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.468800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.468814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.469046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.469063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.469283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.469297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.469546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.469759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.383 [2024-12-14 00:19:04.469773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.383 qpair failed and we were unable to recover it.
00:38:25.383 [2024-12-14 00:19:04.470039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.470053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.470286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.470304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.470554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.470572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.470733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.470751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.470986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.471003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.471265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.471283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.471445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.471463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.471700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.471742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.471970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.472012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.472329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.472372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.472530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.472549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.472713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.472731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.472886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.472904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.473949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.473967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.474167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.474209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.474542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.474585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.474804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.474847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.475072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.475114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.475403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.475471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.475753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.475796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.476087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.384 [2024-12-14 00:19:04.476129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.384 qpair failed and we were unable to recover it.
00:38:25.384 [2024-12-14 00:19:04.476372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.476391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.476627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.476646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.476802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.476819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.477010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.477029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.477242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.477260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 
00:38:25.384 [2024-12-14 00:19:04.477525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.477544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.477773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.477791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.478052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.478070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.478219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.478237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.478482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.478525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 
00:38:25.384 [2024-12-14 00:19:04.478717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.478767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.479046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.479088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.384 [2024-12-14 00:19:04.479292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.384 [2024-12-14 00:19:04.479335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.384 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.479594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.479638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.479789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.479831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.480061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.480104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.480306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.480348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.480636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.480690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.480923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.480943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.481098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.481120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.481306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.481348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.481627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.481670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.481938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.481980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.482196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.482240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.482492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.482514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.482779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.482800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.482953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.482974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.483163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.483184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.483420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.483489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.483786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.483829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.484152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.484198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.484517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.484825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.484869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.485144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.485186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.485457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.485500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.485762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.485805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.486083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.486125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.486411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.486433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.486705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.486727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.486834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.486859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.487050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.487071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.487328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.487350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.487616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.487637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.487788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.487809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.488016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.488059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.488352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.488394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.488684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.488706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.488976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.488998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.489123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.489144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.489307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.489328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 
00:38:25.385 [2024-12-14 00:19:04.489496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.489521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.385 qpair failed and we were unable to recover it. 00:38:25.385 [2024-12-14 00:19:04.489652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.385 [2024-12-14 00:19:04.489673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.489859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.489889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.490124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.490146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.490257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.490277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.490521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.490543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.490757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.490778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.491024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.491140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.491354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.491557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.491743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.491878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.491899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.492069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.492091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.492289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.492331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.492541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.492584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.492797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.492838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.493154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.493197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.493498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.493520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.493786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.493807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.493907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.493928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.494147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.494168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.494343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.494365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.494588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.494609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.494846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.494867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.494965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.494986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.495236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.495257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.495482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.495504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.495693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.495715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.495903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.495924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 
00:38:25.386 [2024-12-14 00:19:04.496096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.496117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.496308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.496351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.496644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.496688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.386 qpair failed and we were unable to recover it. 00:38:25.386 [2024-12-14 00:19:04.496953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.386 [2024-12-14 00:19:04.496996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.497290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.497332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.497614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.497659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.497915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.497957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.498187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.498231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.498540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.498583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.498830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.498872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.499155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.499217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.499470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.499492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.499713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.499735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.499949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.499970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.500236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.500257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.500431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.500458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.500577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.500598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.500856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.500877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.501141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.501162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.501274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.501295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.501511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.501533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.501753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.501798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.502039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.502081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.502383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.502426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.502662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.502707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.503005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.503046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.503303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.503346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.503570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.503624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.503847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.504088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.504109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.504372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.504393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.504641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.504663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.504907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.504929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.505158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.505179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.505364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.505385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.505638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.505660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.505876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.505926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.506252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.506299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.506530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.506564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.506809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.506856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 
00:38:25.387 [2024-12-14 00:19:04.507148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.507170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.507426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.507453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.387 [2024-12-14 00:19:04.507630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.387 [2024-12-14 00:19:04.507652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.387 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.507820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.507841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.507953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.507974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.508203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.508220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.508468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.508483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.508655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.508669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.508833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.508847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.509078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.509092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.509343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.509395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.509733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.509777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.510069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.510112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.510395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.510446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.510756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.510799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.511084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.511126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.511346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.511389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.511698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.511740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.511990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.512033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.512293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.512334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.512447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.512460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.512626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.512640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.512867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.512881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.513137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.513151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.513430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.513449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.513669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.513682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.513839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.513853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.514008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.514022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.514180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.514193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.514392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.514406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.514628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.514642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.514784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.514798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.515020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.515060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.515303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.515345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.515655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.515694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.515842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.515856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.516103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.516117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.516369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.516412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.516587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.516631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.516865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.516908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 
00:38:25.388 [2024-12-14 00:19:04.517198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.517241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.388 [2024-12-14 00:19:04.517508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.388 [2024-12-14 00:19:04.517521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.388 qpair failed and we were unable to recover it. 00:38:25.389 [2024-12-14 00:19:04.517726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.389 [2024-12-14 00:19:04.517740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.389 qpair failed and we were unable to recover it. 00:38:25.389 [2024-12-14 00:19:04.517834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.389 [2024-12-14 00:19:04.517847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.389 qpair failed and we were unable to recover it. 00:38:25.666 [2024-12-14 00:19:04.518086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.666 [2024-12-14 00:19:04.518101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.666 qpair failed and we were unable to recover it. 
00:38:25.666 [2024-12-14 00:19:04.518303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.666 [2024-12-14 00:19:04.518316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.666 qpair failed and we were unable to recover it. 00:38:25.666 [2024-12-14 00:19:04.518510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.666 [2024-12-14 00:19:04.518524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.666 qpair failed and we were unable to recover it. 00:38:25.666 [2024-12-14 00:19:04.518614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.666 [2024-12-14 00:19:04.518628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.666 qpair failed and we were unable to recover it. 00:38:25.666 [2024-12-14 00:19:04.518845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.518859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 00:38:25.667 [2024-12-14 00:19:04.519003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.519016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 
00:38:25.667 [2024-12-14 00:19:04.519111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.519129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 00:38:25.667 [2024-12-14 00:19:04.519231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.519244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 00:38:25.667 [2024-12-14 00:19:04.519459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.519474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 00:38:25.667 [2024-12-14 00:19:04.519727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.519757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 00:38:25.667 [2024-12-14 00:19:04.520051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.667 [2024-12-14 00:19:04.520094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.667 qpair failed and we were unable to recover it. 
00:38:25.668 [2024-12-14 00:19:04.533131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.668 [2024-12-14 00:19:04.533172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.668 qpair failed and we were unable to recover it.
00:38:25.668 [2024-12-14 00:19:04.533378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.668 [2024-12-14 00:19:04.533421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.668 qpair failed and we were unable to recover it.
00:38:25.668 [2024-12-14 00:19:04.533779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.668 [2024-12-14 00:19:04.533868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.668 qpair failed and we were unable to recover it.
00:38:25.668 [2024-12-14 00:19:04.534235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.668 [2024-12-14 00:19:04.534323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.668 qpair failed and we were unable to recover it.
00:38:25.668 [2024-12-14 00:19:04.534533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.668 [2024-12-14 00:19:04.534560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.668 qpair failed and we were unable to recover it.
00:38:25.670 [2024-12-14 00:19:04.545753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.545796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.545969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.545982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.546177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.546220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.546372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.546414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.546603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.546646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.546846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.546862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.547122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.547307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.547472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.547692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.547805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.547964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.547978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.548101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.548115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.548253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.548266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.548511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.548555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.548748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.548790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.549013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.549056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.549315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.549357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.549563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.549607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.549839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.549853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.549951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.549965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.550107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.550120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.550347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.550360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.550606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.550620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.550824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.550838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.551038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.551052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.551282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.551324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.551524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.551568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.551867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.551909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 00:38:25.670 [2024-12-14 00:19:04.552128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.670 [2024-12-14 00:19:04.552169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.670 qpair failed and we were unable to recover it. 
00:38:25.670 [2024-12-14 00:19:04.552321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.552364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.552618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.552632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.552789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.552805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.553077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.553140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.553425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.553495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.553777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.553790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.553926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.553939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.554174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.554187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.554418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.554475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.554615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.554657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.554861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.554903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.555198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.555240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.555432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.555514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.555798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.555851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.556100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.556114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.556313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.556327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.556553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.556567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.556733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.556774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.556992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.557035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.557243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.557285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.557521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.557535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.557680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.557694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.557877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.557891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.558100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.558114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.558317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.558330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.558606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.558622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.558861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.558875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.559124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.559138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.559287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.559301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.559542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.559556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 
00:38:25.671 [2024-12-14 00:19:04.559750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.559763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.559914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.559966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.560251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.560294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.671 [2024-12-14 00:19:04.560568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.671 [2024-12-14 00:19:04.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.671 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.560884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.560928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 
00:38:25.672 [2024-12-14 00:19:04.561070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.561113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.561349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.561391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.561688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.561702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.561910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.561925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.562016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 
00:38:25.672 [2024-12-14 00:19:04.562111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.562290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.562456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.562639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.562872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.562915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 
00:38:25.672 [2024-12-14 00:19:04.563130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.563172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.563461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.563505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.563716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.563759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.563971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.563996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.564150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.564174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 
00:38:25.672 [2024-12-14 00:19:04.564413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.564470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.564686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.564700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.564800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.564813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.565028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.565042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 00:38:25.672 [2024-12-14 00:19:04.565194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.672 [2024-12-14 00:19:04.565211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.672 qpair failed and we were unable to recover it. 
00:38:25.672 [2024-12-14 00:19:04.565408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.565432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.565603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.565617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.565703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.565716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.566919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.566932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.567092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.567105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.567307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.567322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.567487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.567509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.567673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.567719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.568014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.568058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.568213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.568256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.568571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.568626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.672 [2024-12-14 00:19:04.568833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.672 [2024-12-14 00:19:04.568846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.672 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.568932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.568944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.569181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.569196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.569341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.569354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.569458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.569471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.569694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.569708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.569880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.569894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.570122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.570163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.570457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.570500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.570696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.570711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.570945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.570961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.571130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.571144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.571343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.571386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.571697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.571741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.572036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.572079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.572318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.572361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.572519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.572563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.572830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.572873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.573872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.573885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.574967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.574979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.673 qpair failed and we were unable to recover it.
00:38:25.673 [2024-12-14 00:19:04.575714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.673 [2024-12-14 00:19:04.575728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.575865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.575879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.576976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.576989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.577162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.577203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.577362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.577405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.577692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.577776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.577996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.578984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.578996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.579800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.579814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.580022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.580064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.580305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.580347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.580581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.580627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.580765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.580779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.580916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.580930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.581093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.581106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.581318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.581360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.581706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.581763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.581934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.581962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.582111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.582139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.582336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.582379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.582602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.582646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.582868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.674 [2024-12-14 00:19:04.582931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.674 qpair failed and we were unable to recover it.
00:38:25.674 [2024-12-14 00:19:04.583091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.583133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.583353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.583395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.583598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.583612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.583751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.583764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.583943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.583978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.584928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.584998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.585920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.585932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.586974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.586986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.587981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.587993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.588075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.588087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.588237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.588251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.588350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.588363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.675 qpair failed and we were unable to recover it.
00:38:25.675 [2024-12-14 00:19:04.588517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.675 [2024-12-14 00:19:04.588530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.588616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.588629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.588717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.588730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.588871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.588884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.589954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.589967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.590954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.590967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.591958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.591971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.592781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.676 [2024-12-14 00:19:04.592794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.676 qpair failed and we were unable to recover it.
00:38:25.676 [2024-12-14 00:19:04.593003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.593985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.593998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.594787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.594989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.595929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.595944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.596097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.596110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.596198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.677 [2024-12-14 00:19:04.596212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.677 qpair failed and we were unable to recover it.
00:38:25.677 [2024-12-14 00:19:04.596369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.596473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.596569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.596661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.596828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 
00:38:25.677 [2024-12-14 00:19:04.596931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.596944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 
00:38:25.677 [2024-12-14 00:19:04.597544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 00:38:25.677 [2024-12-14 00:19:04.597981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.677 [2024-12-14 00:19:04.597994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.677 qpair failed and we were unable to recover it. 
00:38:25.677 [2024-12-14 00:19:04.598169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.598270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.598349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.598501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.598616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.598795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.598953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.598968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.599443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.599908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.599922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.600122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.600308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.600402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.600599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.600698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.600866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.600963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.600976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.601657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.601863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.601997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.602085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.602261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.602375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.602593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.602778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.602792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.602998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.603011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.603084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.603098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 
00:38:25.678 [2024-12-14 00:19:04.603172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.603185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.603347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.603361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.678 qpair failed and we were unable to recover it. 00:38:25.678 [2024-12-14 00:19:04.603521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.678 [2024-12-14 00:19:04.603535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.603671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.603684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.603764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.603777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.603868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.603881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.604590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.604971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.604984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.605078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.605232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.605399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.605498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.605660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.605827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.605915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.605928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.606460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.606923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.606937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.607110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.607123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.607279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.607292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.607431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.607451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.607645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.607660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 00:38:25.679 [2024-12-14 00:19:04.607863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.679 [2024-12-14 00:19:04.607880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.679 qpair failed and we were unable to recover it. 
00:38:25.679 [2024-12-14 00:19:04.607972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.679 [2024-12-14 00:19:04.607984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.679 qpair failed and we were unable to recover it.
00:38:25.682 [2024-12-14 00:19:04.621756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.682 [2024-12-14 00:19:04.621786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.682 qpair failed and we were unable to recover it.
00:38:25.682 [2024-12-14 00:19:04.621886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.682 [2024-12-14 00:19:04.621909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.682 qpair failed and we were unable to recover it.
00:38:25.682 [2024-12-14 00:19:04.622013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.682 [2024-12-14 00:19:04.622035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.682 qpair failed and we were unable to recover it.
00:38:25.683 [2024-12-14 00:19:04.624091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.624199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.624449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.624541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.624702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.624861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.624874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.625119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.625277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.625463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.625639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.625879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.625893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.626472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.626912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.626926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.627071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.627228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.627390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.627553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.627636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.627804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.627905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.627919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.628108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.628266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.628378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.628544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.628712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.628880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.628893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.629043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.629057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.629136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.629149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.629326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.629339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 
00:38:25.683 [2024-12-14 00:19:04.629493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.683 [2024-12-14 00:19:04.629508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.683 qpair failed and we were unable to recover it. 00:38:25.683 [2024-12-14 00:19:04.629670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.629683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.629830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.629846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.629913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.629927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.630118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.630281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.630399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.630516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.630597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.630717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.630817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.630830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.631569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.631861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.631999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.632212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.632794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.632807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.632990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.633509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.633922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.633935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.634008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.634021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 
00:38:25.684 [2024-12-14 00:19:04.634110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.634123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.634294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.634308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.634459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.634473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.684 [2024-12-14 00:19:04.634623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.684 [2024-12-14 00:19:04.634637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.684 qpair failed and we were unable to recover it. 00:38:25.685 [2024-12-14 00:19:04.634784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 
00:38:25.685 [2024-12-14 00:19:04.634879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.634894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 00:38:25.685 [2024-12-14 00:19:04.635095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.635109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 00:38:25.685 [2024-12-14 00:19:04.635254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.635267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 00:38:25.685 [2024-12-14 00:19:04.635376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.635391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 00:38:25.685 [2024-12-14 00:19:04.635523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.685 [2024-12-14 00:19:04.635541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.685 qpair failed and we were unable to recover it. 
00:38:25.687 [2024-12-14 00:19:04.648453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.648480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.648657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.648679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.648831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.648852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.649009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.649031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.649138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.649159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.649319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.687 [2024-12-14 00:19:04.649339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.687 qpair failed and we were unable to recover it.
00:38:25.687 [2024-12-14 00:19:04.649943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.687 [2024-12-14 00:19:04.649957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.687 qpair failed and we were unable to recover it. 00:38:25.687 [2024-12-14 00:19:04.650040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.687 [2024-12-14 00:19:04.650053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.687 qpair failed and we were unable to recover it. 00:38:25.687 [2024-12-14 00:19:04.650209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.687 [2024-12-14 00:19:04.650222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.687 qpair failed and we were unable to recover it. 00:38:25.687 [2024-12-14 00:19:04.650301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.687 [2024-12-14 00:19:04.650315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.687 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.650465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.650483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.650567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.650580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.650659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.650672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.650761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.650775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.651342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.651953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.651969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.652058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.652300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.652465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.652550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.652707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.652873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.652887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.652988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.653154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.653320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.653426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.653545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.653788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.653946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.653960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.654337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.654855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.654993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 
00:38:25.688 [2024-12-14 00:19:04.655185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.655350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.655505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.655598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.688 qpair failed and we were unable to recover it. 00:38:25.688 [2024-12-14 00:19:04.655819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.688 [2024-12-14 00:19:04.655832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.655936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.655949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.656101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.656332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.656432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.656657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.656848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.656935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.656949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.657175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.657189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.657325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.657338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.657551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.657565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.657712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.657726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.657885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.657898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.658411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.658893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.658915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.659100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.659122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.659312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.659334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.659487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.659510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.659674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.659695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.659853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.659874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.660026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.660153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.660396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.660511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.660736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.660850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.660864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 
00:38:25.689 [2024-12-14 00:19:04.661018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.661031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.661108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.661122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.661200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.689 [2024-12-14 00:19:04.661213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.689 qpair failed and we were unable to recover it. 00:38:25.689 [2024-12-14 00:19:04.661310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.661324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.661480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.661494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 
00:38:25.690 [2024-12-14 00:19:04.661666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.661679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.661760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.661774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.661867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.661881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 
00:38:25.690 [2024-12-14 00:19:04.662312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 00:38:25.690 [2024-12-14 00:19:04.662888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.662905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it. 
00:38:25.690 [2024-12-14 00:19:04.662994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.690 [2024-12-14 00:19:04.663007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.690 qpair failed and we were unable to recover it.
[last message pair repeated ~110 more times for tqpair=0x61500033fe80 between 00:19:04.663 and 00:19:04.680; the same connect() failure (errno = 111, addr=10.0.0.2, port=4420) was also reported for tqpair=0x615000326200, tqpair=0x615000350000, and tqpair=0x61500032ff80]
00:38:25.693 [2024-12-14 00:19:04.680512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.680527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.680629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.680642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.680792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.680806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.680949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.680962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.681098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 
00:38:25.693 [2024-12-14 00:19:04.681308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.681394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.681576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.681683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.681853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.681867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 
00:38:25.693 [2024-12-14 00:19:04.682035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.682219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.682390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.682543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.682643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 
00:38:25.693 [2024-12-14 00:19:04.682798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.682967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.682981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.683159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.683173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.683380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.683394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.683485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.683499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 
00:38:25.693 [2024-12-14 00:19:04.683705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.683718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.683802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.683815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.683991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.684008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.684083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.684096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.693 [2024-12-14 00:19:04.684251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.684268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 
00:38:25.693 [2024-12-14 00:19:04.684354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.693 [2024-12-14 00:19:04.684368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.693 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.684466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.684480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.684721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.684735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.684828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.684841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.685066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.685079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.685164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.685176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.685359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.685385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.685681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.685706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.685896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.686154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.686269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.686428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.686598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.686690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.686852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.686963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.686994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.687580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.687924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.688084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.688243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.688326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.688485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.688643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.688750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.688763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 
00:38:25.694 [2024-12-14 00:19:04.688999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.689012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.689100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.689112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.689183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.689196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.689339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.689352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.694 qpair failed and we were unable to recover it. 00:38:25.694 [2024-12-14 00:19:04.689429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.694 [2024-12-14 00:19:04.689445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 
00:38:25.695 [2024-12-14 00:19:04.689655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.689669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.689755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.689769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.689848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.689862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.690019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.690177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 
00:38:25.695 [2024-12-14 00:19:04.690259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.690414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.690655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.690839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.690853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.691007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 
00:38:25.695 [2024-12-14 00:19:04.691177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.691309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.691614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.691731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 00:38:25.695 [2024-12-14 00:19:04.691968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.695 [2024-12-14 00:19:04.691984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.695 qpair failed and we were unable to recover it. 
00:38:25.695 [2024-12-14 00:19:04.692067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.692928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.692940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.693774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.693787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.694958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.694970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.695050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.695063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.695201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.695 [2024-12-14 00:19:04.695215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.695 qpair failed and we were unable to recover it.
00:38:25.695 [2024-12-14 00:19:04.695356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.695449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.695538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.695709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.695803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.695889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.695901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.696840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.696854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.697950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.697963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.698870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.698882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.699942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.699955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.700117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.700133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.700331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.700345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.700513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.700527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.700713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.700726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.700894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.700908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.701056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.701069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.696 qpair failed and we were unable to recover it.
00:38:25.696 [2024-12-14 00:19:04.701279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.696 [2024-12-14 00:19:04.701293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.701379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.701392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.701476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.701489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.701585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.701598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.701696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.701709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.701857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.701871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.702816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.702829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.703837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.703859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.704830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.704843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.705836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.705850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.706943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.706957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.707053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.697 [2024-12-14 00:19:04.707066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.697 qpair failed and we were unable to recover it.
00:38:25.697 [2024-12-14 00:19:04.707218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.707232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.707405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.707418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.707571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.707585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.707789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.707803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.707960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.707974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.708921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.708934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.709101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.709258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.709421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.709645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.698 [2024-12-14 00:19:04.709820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.698 qpair failed and we were unable to recover it.
00:38:25.698 [2024-12-14 00:19:04.709907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.709921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.710106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.710301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.710432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.710553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 
00:38:25.698 [2024-12-14 00:19:04.710715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.710932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.710945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 
00:38:25.698 [2024-12-14 00:19:04.711365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.711926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.711941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 
00:38:25.698 [2024-12-14 00:19:04.712089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.712102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.712178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.712191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.712265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.698 [2024-12-14 00:19:04.712278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.698 qpair failed and we were unable to recover it. 00:38:25.698 [2024-12-14 00:19:04.712361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.712374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.712533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.712547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.712704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.712717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.712871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.712885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.713443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.713860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.713991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.714246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.714405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.714584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.714737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.714970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.714983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.715133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.715147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.715280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.715293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.715458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.715472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.715697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.715710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.715856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.715870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.716016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.716581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.716928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.716941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.717036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.717203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.717384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.717486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.717708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.717858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.717871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 
00:38:25.699 [2024-12-14 00:19:04.718021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.718035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.718137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.699 [2024-12-14 00:19:04.718150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.699 qpair failed and we were unable to recover it. 00:38:25.699 [2024-12-14 00:19:04.718231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.718354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.718719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.718826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.718913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.718926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.719344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.719833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.719846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.719988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.720611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.720901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.720914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.721065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.721078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.721308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.721322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.721531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.721545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.721755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.721770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.721915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.721928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.722319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.722837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.722851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 
00:38:25.700 [2024-12-14 00:19:04.723001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.723015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.723217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.723230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.723380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.723394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.723561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.723584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.700 qpair failed and we were unable to recover it. 00:38:25.700 [2024-12-14 00:19:04.723717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.700 [2024-12-14 00:19:04.723731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.723817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.723830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.724710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.724950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.724962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.725235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.725692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.725905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.725918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.726547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.726869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.726883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.727408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.727843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.727860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.728019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.728183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.728332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.728501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.728582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 
00:38:25.701 [2024-12-14 00:19:04.728684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.728847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.728861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.729014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.701 [2024-12-14 00:19:04.729027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.701 qpair failed and we were unable to recover it. 00:38:25.701 [2024-12-14 00:19:04.729164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.729326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.729416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.729638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.729719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.729812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.729824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.730028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.730218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.730372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.730540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.730706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.730826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.730908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.730920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.731642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.731945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.731958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.732325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.732832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.732983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.732996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.733583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.733891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.733905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.734037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.734051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 00:38:25.702 [2024-12-14 00:19:04.734130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.702 [2024-12-14 00:19:04.734143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.702 qpair failed and we were unable to recover it. 
00:38:25.702 [2024-12-14 00:19:04.734295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.702 [2024-12-14 00:19:04.734308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.702 qpair failed and we were unable to recover it.
00:38:25.702 [2024-12-14 00:19:04.734384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.702 [2024-12-14 00:19:04.734397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.702 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.734493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.734507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.734643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.734657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.734867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.734881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.735875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.735888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.736853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.736874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.737984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.737998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.738867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.738884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.739244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.739258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.739344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.739357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.739422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.739435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.739578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.703 [2024-12-14 00:19:04.739592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.703 qpair failed and we were unable to recover it.
00:38:25.703 [2024-12-14 00:19:04.739750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.739763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.739839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.739852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.739944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.739959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.740854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.740868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.741889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.741903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.742972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.742985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.743914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.743927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.744983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.744996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.745149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.704 [2024-12-14 00:19:04.745163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.704 qpair failed and we were unable to recover it.
00:38:25.704 [2024-12-14 00:19:04.745247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.745260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.745395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.745409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.745547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.745561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.745711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.745724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.745877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.745891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.746980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.747924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.747937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.748815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.748828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.749899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.705 qpair failed and we were unable to recover it.
00:38:25.705 [2024-12-14 00:19:04.749968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.705 [2024-12-14 00:19:04.750007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.706 qpair failed and we were unable to recover it.
00:38:25.706 [2024-12-14 00:19:04.750212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.706 [2024-12-14 00:19:04.750225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.706 qpair failed and we were unable to recover it.
00:38:25.706 [2024-12-14 00:19:04.750313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.706 [2024-12-14 00:19:04.750326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.706 qpair failed and we were unable to recover it.
00:38:25.706 [2024-12-14 00:19:04.750527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.706 [2024-12-14 00:19:04.750541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.706 qpair failed and we were unable to recover it.
00:38:25.706 [2024-12-14 00:19:04.750619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.750632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.750707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.750721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.750889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.750903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.750971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.750983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.751137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.751286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.751435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.751599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.751772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.751943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.751957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.752125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.752345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.752463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.752626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.752779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.752871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.752970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.752984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.753124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.753216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.753370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.753591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.753740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.753899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.753913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.754009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set 00:38:25.706 [2024-12-14 00:19:04.754173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.754374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.754493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.754664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.754777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.754950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.754970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.755189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.706 [2024-12-14 00:19:04.755285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.755393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.755490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.755593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 00:38:25.706 [2024-12-14 00:19:04.755759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.706 [2024-12-14 00:19:04.755772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.706 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.756016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.756582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.756859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.756871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.757287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.757869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.757984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.757997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.758201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.758214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.758414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.758428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.758534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.758547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.758699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.758713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.758915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.758928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.759698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.759843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.759995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.760148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.760161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.760238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.760251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 
00:38:25.707 [2024-12-14 00:19:04.760317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.760330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.707 qpair failed and we were unable to recover it. 00:38:25.707 [2024-12-14 00:19:04.760419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.707 [2024-12-14 00:19:04.760432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.760593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.760607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.760807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.760820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.761042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.761207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.761356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.761543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.761700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.761868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.761881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.762026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.762144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.762290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.762470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.762643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.762868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.762882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.763723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.763835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.763987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.764080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.764229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.764479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.764722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.764914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.764928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.765310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.765813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 
00:38:25.708 [2024-12-14 00:19:04.765933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.765949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.766106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.708 qpair failed and we were unable to recover it. 00:38:25.708 [2024-12-14 00:19:04.766212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.708 [2024-12-14 00:19:04.766225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.766364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.766461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.766553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.766659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.766764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.766984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.766997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.767077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.767247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.767338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.767605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.767828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.767924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.767940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.768020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.768173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.768449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.768532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.768680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.768782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.768796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.768999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.769172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.769346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.769523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.769705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.769879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.769893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.770054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.770271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.770430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.770675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.770831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.770963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.770977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.771386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.771786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.771989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.772003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 
00:38:25.709 [2024-12-14 00:19:04.772139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.772153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.709 [2024-12-14 00:19:04.772289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.709 [2024-12-14 00:19:04.772304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.709 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.772459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.772474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.772608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.772622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.772773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.772787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.772944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.772957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.773517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.773945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.773958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.774028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.774130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.774252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.774471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.774658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.774784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.774950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.774964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.775675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.775933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.775947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.776024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.776036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 00:38:25.710 [2024-12-14 00:19:04.776246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.710 [2024-12-14 00:19:04.776262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.710 qpair failed and we were unable to recover it. 
00:38:25.710 [2024-12-14 00:19:04.776341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.776436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.776529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.776625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.776718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.776874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.776888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.777086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.777099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.777250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.777263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.777397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.777411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.777500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.777514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.710 qpair failed and we were unable to recover it.
00:38:25.710 [2024-12-14 00:19:04.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.710 [2024-12-14 00:19:04.777774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.777926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.777939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.778949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.778962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.779974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.779987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.780949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.780963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.781116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.781130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.781329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.781342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.781509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.781523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.781750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.781764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.781866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.781879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.782944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.782958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.783046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.783059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.783147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.783161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.783296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.783310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.783456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.783470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.711 [2024-12-14 00:19:04.783696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.711 [2024-12-14 00:19:04.783710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.711 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.783890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.783903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.783984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.783997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.784848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.784862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.785973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.785986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.786944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.786957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.787041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.787054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.787215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.787228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.787297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.787309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.712 [2024-12-14 00:19:04.787396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.712 [2024-12-14 00:19:04.787409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.712 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.787546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.787560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.995 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.787644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.787657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.995 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.787860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.787873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.995 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.788033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.788046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.995 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.788206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.788222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.995 qpair failed and we were unable to recover it.
00:38:25.995 [2024-12-14 00:19:04.788307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.995 [2024-12-14 00:19:04.788321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.788904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.788918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.789962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.789975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.790903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.790916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.791869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.791882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.792018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.792031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.792131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.792144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.792282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.792296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.792377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:25.996 [2024-12-14 00:19:04.792391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:25.996 qpair failed and we were unable to recover it.
00:38:25.996 [2024-12-14 00:19:04.792470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.996 [2024-12-14 00:19:04.792484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.996 qpair failed and we were unable to recover it. 00:38:25.996 [2024-12-14 00:19:04.792623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.996 [2024-12-14 00:19:04.792636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.996 qpair failed and we were unable to recover it. 00:38:25.996 [2024-12-14 00:19:04.792717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.996 [2024-12-14 00:19:04.792730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.996 qpair failed and we were unable to recover it. 00:38:25.996 [2024-12-14 00:19:04.792893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.996 [2024-12-14 00:19:04.792906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.996 qpair failed and we were unable to recover it. 00:38:25.996 [2024-12-14 00:19:04.793143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.793311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.793423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.793541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.793621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.793717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.793955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.793969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.794171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.794185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.794346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.794359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.794460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.794473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.794636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.794649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.794802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.794815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.794987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.795202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.795304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.795471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.795685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.795791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.795964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.795977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.796341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.796840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.796948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.796965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.797695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.797915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.797999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.798013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.798160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.798173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 
00:38:25.997 [2024-12-14 00:19:04.798307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.798320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.798403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.798416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.798564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.997 [2024-12-14 00:19:04.798578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.997 qpair failed and we were unable to recover it. 00:38:25.997 [2024-12-14 00:19:04.798740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.798753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.798901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.798918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.799014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.799273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.799358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.799457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.799616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.799725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.799825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.799838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.800044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.800207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.800386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.800497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.800712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.800874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.800887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.801385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.801927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.801940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.802096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.802194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.802284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.802482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.802565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.802785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.802906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.802931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.803132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.803363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.803579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [2024-12-14 00:19:04.803679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.803836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.803938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.803951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.804035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.804049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 00:38:25.998 [2024-12-14 00:19:04.804202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:25.998 [2024-12-14 00:19:04.804215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:25.998 qpair failed and we were unable to recover it. 
00:38:25.998 [... 2024-12-14 00:19:04.804366 through 00:19:04.819951: the error pair above repeated 110 more times; every connect() to addr=10.0.0.2, port=4420 failed with errno = 111 (ECONNREFUSED), mostly for tqpair=0x61500033fe80, with occasional attempts for tqpair=0x615000350000, 0x615000326200, and 0x61500032ff80; each qpair failed and was not recovered ...]
00:38:26.001 [2024-12-14 00:19:04.820158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.001 [2024-12-14 00:19:04.820172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.001 qpair failed and we were unable to recover it. 00:38:26.001 [2024-12-14 00:19:04.820274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.001 [2024-12-14 00:19:04.820297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.001 qpair failed and we were unable to recover it. 00:38:26.001 [2024-12-14 00:19:04.820489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.001 [2024-12-14 00:19:04.820514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.001 qpair failed and we were unable to recover it. 00:38:26.001 [2024-12-14 00:19:04.820670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.001 [2024-12-14 00:19:04.820691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.001 qpair failed and we were unable to recover it. 00:38:26.001 [2024-12-14 00:19:04.820853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.001 [2024-12-14 00:19:04.820871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.821028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.821700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.821959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.821972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.822121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.822234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.822462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.822632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.822784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.822939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.822952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.823203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.823881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.823982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.823996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.824200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.824213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.824462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.824476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.824568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.824582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.824668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.824681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.824908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.824922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.825083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.825243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.825404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.825623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.825712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.825861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.825874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.826042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.826263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 
00:38:26.002 [2024-12-14 00:19:04.826480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.826592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.826840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.002 [2024-12-14 00:19:04.826953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.002 [2024-12-14 00:19:04.826967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.002 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.827194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.827804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.827977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.827990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.828089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.828278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.828444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.828613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.828775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.828939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.828952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.829463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.829907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.829998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.830169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.830259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.830433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.830666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.830846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.830941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.830954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.831102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.831116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.831323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.831337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.831479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.831493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.831750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.831763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.831897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.831910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 
00:38:26.003 [2024-12-14 00:19:04.832618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.832976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.003 [2024-12-14 00:19:04.832989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.003 qpair failed and we were unable to recover it. 00:38:26.003 [2024-12-14 00:19:04.833133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.833243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.833409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.833506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.833694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.833783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.833888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.833978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.833991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.834545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.834936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.834949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.835095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.835109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.835204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.835217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.835376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.835390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.835547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.835561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.835763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.835777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.835991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.836200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.836312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.836535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.836693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.836792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.836889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.836902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 
00:38:26.004 [2024-12-14 00:19:04.837494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.004 [2024-12-14 00:19:04.837660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.004 qpair failed and we were unable to recover it. 00:38:26.004 [2024-12-14 00:19:04.837747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.837759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.837914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.837926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.838013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.838157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.838306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.838456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.838538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.838699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.838872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.838885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.839496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.839977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.839990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.840077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.840173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.840321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.840485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.840642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.840806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.840953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.840966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.841754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.841984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.841995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.842088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.842308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.842402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.842686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.842852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.842936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.842948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.843153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.843166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 
00:38:26.005 [2024-12-14 00:19:04.843246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.843259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.005 [2024-12-14 00:19:04.843343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.005 [2024-12-14 00:19:04.843356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.005 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.843560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.843574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.843639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.843651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.843890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.843903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 
00:38:26.006 [2024-12-14 00:19:04.843988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 
00:38:26.006 [2024-12-14 00:19:04.844631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.844945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.844958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 
00:38:26.006 [2024-12-14 00:19:04.845230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 00:38:26.006 [2024-12-14 00:19:04.845863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.006 [2024-12-14 00:19:04.845876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.006 qpair failed and we were unable to recover it. 
00:38:26.007 [2024-12-14 00:19:04.849916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.007 [2024-12-14 00:19:04.849929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.007 qpair failed and we were unable to recover it.
00:38:26.007 [2024-12-14 00:19:04.850214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.007 [2024-12-14 00:19:04.850241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.007 qpair failed and we were unable to recover it.
00:38:26.007 [2024-12-14 00:19:04.851323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.007 [2024-12-14 00:19:04.851338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.007 qpair failed and we were unable to recover it.
00:38:26.007 [2024-12-14 00:19:04.851672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.007 [2024-12-14 00:19:04.851699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.007 qpair failed and we were unable to recover it.
00:38:26.007 [2024-12-14 00:19:04.852255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.007 [2024-12-14 00:19:04.852270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.007 qpair failed and we were unable to recover it.
00:38:26.009 [2024-12-14 00:19:04.860998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 
00:38:26.009 [2024-12-14 00:19:04.861766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.861947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.861961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 
00:38:26.009 [2024-12-14 00:19:04.862359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.862815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 
00:38:26.009 [2024-12-14 00:19:04.862977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.862992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.863075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.863246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.863397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.863563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 
00:38:26.009 [2024-12-14 00:19:04.863783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.863953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.863966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.864057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.864071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.864141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.864155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 00:38:26.009 [2024-12-14 00:19:04.864303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.009 [2024-12-14 00:19:04.864316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.009 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.864489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.864504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.864712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.864726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.864812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.864825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.865223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.865929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.865942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.866148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.866259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.866350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.866451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.866612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.866832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.866846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.866991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.867518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.867956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.867969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.868270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.868883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.868973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.868986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 
00:38:26.010 [2024-12-14 00:19:04.869508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.869944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.010 [2024-12-14 00:19:04.869968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.010 qpair failed and we were unable to recover it. 00:38:26.010 [2024-12-14 00:19:04.870128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.870149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 
00:38:26.011 [2024-12-14 00:19:04.870261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.870282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.870452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.870474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.870593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.870614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.870854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.870876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.871030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 
00:38:26.011 [2024-12-14 00:19:04.871218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.871389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.871614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.871857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.871951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.871964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 
00:38:26.011 [2024-12-14 00:19:04.872046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.872068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.872221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.872237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.872346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.872360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.872450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.872464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 00:38:26.011 [2024-12-14 00:19:04.872601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.011 [2024-12-14 00:19:04.872614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.011 qpair failed and we were unable to recover it. 
00:38:26.011 [2024-12-14 00:19:04.875720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.011 [2024-12-14 00:19:04.875751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.011 qpair failed and we were unable to recover it.
00:38:26.011 [2024-12-14 00:19:04.875863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.011 [2024-12-14 00:19:04.875889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.011 qpair failed and we were unable to recover it.
00:38:26.011 [2024-12-14 00:19:04.875998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.011 [2024-12-14 00:19:04.876023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.011 qpair failed and we were unable to recover it.
00:38:26.011 [2024-12-14 00:19:04.876190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.011 [2024-12-14 00:19:04.876205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.012 qpair failed and we were unable to recover it.
00:38:26.012 [2024-12-14 00:19:04.876302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.012 [2024-12-14 00:19:04.876315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.012 qpair failed and we were unable to recover it.
00:38:26.012 [2024-12-14 00:19:04.876400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.876501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.876590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.876673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.876780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.876944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.876957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.877190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.877365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.877542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.877701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.877815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.877974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.877990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.878202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.878215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.878300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.878313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.878454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.878468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.878614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.878628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.878814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.878827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.879453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.879939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.879952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.880091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.880310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.880424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.880605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.880705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.880889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.880902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.881104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.881270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.881414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.881585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 
00:38:26.012 [2024-12-14 00:19:04.881744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.881854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.881868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.012 [2024-12-14 00:19:04.882002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.012 [2024-12-14 00:19:04.882020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.012 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.882129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.882332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.882455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.882635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.882794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.882896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.882909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.883167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.883873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.883979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.883995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.884608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.884960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.884972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.885121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.885222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.885387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.885534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.885686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.885789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.885944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.885958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 
00:38:26.013 [2024-12-14 00:19:04.886666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.013 [2024-12-14 00:19:04.886772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.013 qpair failed and we were unable to recover it. 00:38:26.013 [2024-12-14 00:19:04.886853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.014 [2024-12-14 00:19:04.886867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.014 qpair failed and we were unable to recover it. 00:38:26.014 [2024-12-14 00:19:04.886961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.014 [2024-12-14 00:19:04.886974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.014 qpair failed and we were unable to recover it. 00:38:26.014 [2024-12-14 00:19:04.887062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.014 [2024-12-14 00:19:04.887076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.014 qpair failed and we were unable to recover it. 
00:38:26.014 [2024-12-14 00:19:04.887212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.887381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.887541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.887642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.887827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.887964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.887988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.888969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.888983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.889850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.889863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.890873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.890886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.891874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.891888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.892022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.892035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.014 qpair failed and we were unable to recover it.
00:38:26.014 [2024-12-14 00:19:04.892170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.014 [2024-12-14 00:19:04.892183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.892269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.892282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.892497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.892510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.892696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.892709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.892857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.892870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.893007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.893021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.893230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.893244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.893512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.893539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.893706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.893729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.893956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.893980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.894907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.894989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.895947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.895960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.896978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.896991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.897971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.897985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.898074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.015 [2024-12-14 00:19:04.898087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.015 qpair failed and we were unable to recover it.
00:38:26.015 [2024-12-14 00:19:04.898254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.898968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.898994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.899963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.899977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.900054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.900068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.900205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.900300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.900313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.900408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.016 [2024-12-14 00:19:04.900421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.016 qpair failed and we were unable to recover it.
00:38:26.016 [2024-12-14 00:19:04.900577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.900592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.900725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.900738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.900833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.900846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.900935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.900948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.901088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 
00:38:26.016 [2024-12-14 00:19:04.901234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.901393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.901642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.901801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.901899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.901912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 
00:38:26.016 [2024-12-14 00:19:04.902045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 
00:38:26.016 [2024-12-14 00:19:04.902592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.902937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.902950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 00:38:26.016 [2024-12-14 00:19:04.903026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.016 [2024-12-14 00:19:04.903039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.016 qpair failed and we were unable to recover it. 
00:38:26.016 [2024-12-14 00:19:04.903135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.903889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.903902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.904964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.904978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.905839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.905853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.906934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.906947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.017 qpair failed and we were unable to recover it.
00:38:26.017 [2024-12-14 00:19:04.907756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.017 [2024-12-14 00:19:04.907769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.907917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.907930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.908881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.908894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.909982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.909995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.910960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.910974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.911916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.911993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.912150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.912312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.912479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.912694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.912898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.912911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.913048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.913061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.913144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.913158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.913308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.913321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.018 [2024-12-14 00:19:04.913412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.018 [2024-12-14 00:19:04.913425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.018 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.913671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.913695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.913802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.913823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.913925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.914868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.914881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.915959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.915971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.916941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.916954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.917954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.917967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.918059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.918072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.918160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.918173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.918248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.918261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.918326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.019 [2024-12-14 00:19:04.918339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.019 qpair failed and we were unable to recover it.
00:38:26.019 [2024-12-14 00:19:04.918479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.918493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.918565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.918579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.918721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.918734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.918969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.918983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.919981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.919994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.920857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.920870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.921903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.921917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.922070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.922083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.922274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.922288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.922463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.922477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.922622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.922636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.922890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.922903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.923137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.923151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.923324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.923337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.923471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.923485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.923658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.020 [2024-12-14 00:19:04.923672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.020 qpair failed and we were unable to recover it.
00:38:26.020 [2024-12-14 00:19:04.923826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.923839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.923994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.924974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.924995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.925924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.925944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.926124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.926146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.926254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.926275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.926495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.926517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.926605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.926621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.926776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.926789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.927026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.927215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.927425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.927584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.021 [2024-12-14 00:19:04.927688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.021 qpair failed and we were unable to recover it.
00:38:26.021 [2024-12-14 00:19:04.927853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.927867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 
00:38:26.021 [2024-12-14 00:19:04.928569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.928914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.928927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.929063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 
00:38:26.021 [2024-12-14 00:19:04.929155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.929248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.929399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.929484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 00:38:26.021 [2024-12-14 00:19:04.929651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.021 [2024-12-14 00:19:04.929664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.021 qpair failed and we were unable to recover it. 
00:38:26.021 [2024-12-14 00:19:04.929820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.929833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.929920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.929934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.930491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.930939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.930953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.931114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.931331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.931415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.931574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.931678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.931849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.931863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.932506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.932849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.932862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.933345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.933957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.933970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 
00:38:26.022 [2024-12-14 00:19:04.934129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.934142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.934366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.934379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.934465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.934479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.934649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.934662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.022 qpair failed and we were unable to recover it. 00:38:26.022 [2024-12-14 00:19:04.934865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.022 [2024-12-14 00:19:04.934878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.935129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.935142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.935223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.935237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.935444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.935458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.935684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.935697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.935877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.935890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.936071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.936716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.936911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.936924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.937341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.937898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.937911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.938137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.938249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.938421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.938608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.938800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.938969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.938982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.939218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.939232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.939395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.939409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.939615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.939631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.939860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.939874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.940013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.940248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.940359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.940521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.940686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 
00:38:26.023 [2024-12-14 00:19:04.940941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.940954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.941101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.941114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.941262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.941276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.023 [2024-12-14 00:19:04.941428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.023 [2024-12-14 00:19:04.941446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.023 qpair failed and we were unable to recover it. 00:38:26.024 [2024-12-14 00:19:04.941613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.024 [2024-12-14 00:19:04.941626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.024 qpair failed and we were unable to recover it. 
00:38:26.025 [2024-12-14 00:19:04.950229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.950254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.950493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.950515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.950614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.950634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.950753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.950775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.950933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.950954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.951123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.951144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.951244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.951265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.951357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.951378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.951524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.951546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.025 [2024-12-14 00:19:04.951691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.025 [2024-12-14 00:19:04.951707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.025 qpair failed and we were unable to recover it.
00:38:26.026 [2024-12-14 00:19:04.958176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.026 [2024-12-14 00:19:04.958189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.026 qpair failed and we were unable to recover it. 00:38:26.026 [2024-12-14 00:19:04.958275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.026 [2024-12-14 00:19:04.958287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.026 qpair failed and we were unable to recover it. 00:38:26.026 [2024-12-14 00:19:04.958425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.026 [2024-12-14 00:19:04.958443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.026 qpair failed and we were unable to recover it. 00:38:26.026 [2024-12-14 00:19:04.958597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.958610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.958774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.958787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.958940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.958954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.959569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.959959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.959973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.960137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.960220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.960396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.960485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.960710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.960870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.960883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.961088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.961198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.961300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.961460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.961637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.961851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.961866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.962028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.962160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.962425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.962562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.962680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.962863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.962884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.963739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.963961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.963974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.964199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.964214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.964359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.964371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 
00:38:26.027 [2024-12-14 00:19:04.964510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.964524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.964673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.027 [2024-12-14 00:19:04.964686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.027 qpair failed and we were unable to recover it. 00:38:26.027 [2024-12-14 00:19:04.964819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.964833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.965290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.965831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.965844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.965988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.966598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.966859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.966991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.967226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.967319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.967487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.967593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.967682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.967840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.967853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.968127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.968153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.968324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.968347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.968464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.968486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.968642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.968663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.968851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.968872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.969093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.969114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.969273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.969294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.969501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.969525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.969629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.969649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 00:38:26.028 [2024-12-14 00:19:04.969856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.028 [2024-12-14 00:19:04.969872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.028 qpair failed and we were unable to recover it. 
00:38:26.028 [2024-12-14 00:19:04.969947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.028 [2024-12-14 00:19:04.969960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.028 qpair failed and we were unable to recover it.
[identical posix_sock_create / nvme_tcp_qpair_connect_sock error records (connect() failed, errno = 111; tqpair=0x61500033fe80, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeat continuously from 2024-12-14 00:19:04.970162 through 2024-12-14 00:19:04.985749]
00:38:26.031 [2024-12-14 00:19:04.985825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.031 [2024-12-14 00:19:04.985839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.031 qpair failed and we were unable to recover it. 00:38:26.031 [2024-12-14 00:19:04.986001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.031 [2024-12-14 00:19:04.986014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.031 qpair failed and we were unable to recover it. 00:38:26.031 [2024-12-14 00:19:04.986099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.031 [2024-12-14 00:19:04.986114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.031 qpair failed and we were unable to recover it. 00:38:26.031 [2024-12-14 00:19:04.986203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.031 [2024-12-14 00:19:04.986216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.031 qpair failed and we were unable to recover it. 00:38:26.031 [2024-12-14 00:19:04.986306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.031 [2024-12-14 00:19:04.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.031 qpair failed and we were unable to recover it. 
00:38:26.031 [2024-12-14 00:19:04.986391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.986403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.986547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.986562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.986647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.986659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.986808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.986821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.986901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.986914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.987065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.987214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.987389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.987486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.987635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.987828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.987980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.987993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.988487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.988906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.988994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.989102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.989244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.989426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.989540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.989627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.989793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.989900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.989914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.990436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.990791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.990995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.991008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 
00:38:26.032 [2024-12-14 00:19:04.991154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.991167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.991369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.991384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.991537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.991551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.032 [2024-12-14 00:19:04.991686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.032 [2024-12-14 00:19:04.991699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.032 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.991794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.991807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.991881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.991914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.992741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.992916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.992991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.993219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.993309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.993478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.993579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.993670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.993836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.993849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.994004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.994087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.994261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.994434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.994707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.994802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.994897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.994910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.995078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.995186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.995290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.995444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.995613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.995830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.995843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.996491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.996914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.996928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.997068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.997082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 
00:38:26.033 [2024-12-14 00:19:04.997157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.997169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.997298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.997312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.997516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.033 [2024-12-14 00:19:04.997529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.033 qpair failed and we were unable to recover it. 00:38:26.033 [2024-12-14 00:19:04.997678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.997692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.997761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.997774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:04.997911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.997925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:04.998482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.998839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.998852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.999088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:04.999255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.999474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.999629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.999733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:04.999900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:04.999913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:05.000136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.000231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.000394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.000569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.000806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:05.000968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.000983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:05.001597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.001854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.001867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:05.002277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 
00:38:26.034 [2024-12-14 00:19:05.002823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.002911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.002924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.034 qpair failed and we were unable to recover it. 00:38:26.034 [2024-12-14 00:19:05.003060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.034 [2024-12-14 00:19:05.003073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.003177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.003337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.003495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.003722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.003879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.003982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.003997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.004255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.004860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.004976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.004990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.005553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.005945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.005958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.006174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.006759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.006868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.006882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.007041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.007054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.007193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.007216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.007296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.007309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 
00:38:26.035 [2024-12-14 00:19:05.007389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.007404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.007478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.035 [2024-12-14 00:19:05.007490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.035 qpair failed and we were unable to recover it. 00:38:26.035 [2024-12-14 00:19:05.007565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.007579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.007727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.007740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.007889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.007903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 
00:38:26.036 [2024-12-14 00:19:05.007988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 
00:38:26.036 [2024-12-14 00:19:05.008557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.008980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.008994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 
00:38:26.036 [2024-12-14 00:19:05.009229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 
00:38:26.036 [2024-12-14 00:19:05.009894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.009984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.009996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.010079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.010091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.010236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.010251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 00:38:26.036 [2024-12-14 00:19:05.010325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.036 [2024-12-14 00:19:05.010340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.036 qpair failed and we were unable to recover it. 
00:38:26.036 [2024-12-14 00:19:05.010475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.010490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.010586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.010601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.010690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.010703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.010853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.010867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.011974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.011987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.012126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.012139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.012284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.012298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.012391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.012404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.012488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.012504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.036 [2024-12-14 00:19:05.012577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.036 [2024-12-14 00:19:05.012590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.036 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.012671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.012685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.012762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.012776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.012930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.012943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.013900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.013913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.014937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.014951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.015945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.015967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.016870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.016884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.017053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.017164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.017256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.017343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.037 [2024-12-14 00:19:05.017472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.037 qpair failed and we were unable to recover it.
00:38:26.037 [2024-12-14 00:19:05.017557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.017571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.017646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.017658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.017891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.017905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.017994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.018979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.018993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.019859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.019872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.020983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.020997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.021876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.021890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.022843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.038 [2024-12-14 00:19:05.022856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.038 qpair failed and we were unable to recover it.
00:38:26.038 [2024-12-14 00:19:05.023002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.038 [2024-12-14 00:19:05.023017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.023633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.023957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.023972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.024392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.024910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.024924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.025062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.025651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.025856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.025870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.026408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.026929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.026947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.027033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.039 [2024-12-14 00:19:05.027602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.027834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.027987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.028000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.028084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.028097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 00:38:26.039 [2024-12-14 00:19:05.028182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.039 [2024-12-14 00:19:05.028195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.039 qpair failed and we were unable to recover it. 
00:38:26.040 [2024-12-14 00:19:05.028289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.028397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.028489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.028602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.028704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 
00:38:26.040 [2024-12-14 00:19:05.028797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.028953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.028967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 
00:38:26.040 [2024-12-14 00:19:05.029328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 
00:38:26.040 [2024-12-14 00:19:05.029847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.029932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.029945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 
00:38:26.040 [2024-12-14 00:19:05.030382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.040 [2024-12-14 00:19:05.030588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.040 qpair failed and we were unable to recover it. 00:38:26.040 [2024-12-14 00:19:05.030734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.030748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.030831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.030844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.030942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.030955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.031707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.031948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.031962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.032425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.032911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.032926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.032989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.033163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.033265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.033412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.033583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.033786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.033928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.033954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.034221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 00:38:26.041 [2024-12-14 00:19:05.034637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.041 [2024-12-14 00:19:05.034650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.041 qpair failed and we were unable to recover it. 
00:38:26.041 [2024-12-14 00:19:05.034730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.034743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.034864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.034885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.035050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.035064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.035199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.035212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.035311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.035325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.035407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.041 [2024-12-14 00:19:05.035420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.041 qpair failed and we were unable to recover it.
00:38:26.041 [2024-12-14 00:19:05.035503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.035516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.035582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.035595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.035766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.035780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.035852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.035866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.035961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.035974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.036958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.036978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.037816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.037989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.038974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.038988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.039975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.039988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.040077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.040090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.040179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.040192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.040262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.040276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.040373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.040386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.042 [2024-12-14 00:19:05.040462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.042 [2024-12-14 00:19:05.040478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.042 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.040561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.040574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.040720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.040733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.040803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.040817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.040899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.040912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.041808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.041991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.042907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.042923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.043936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.043949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.044976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.044989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.043 [2024-12-14 00:19:05.045641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.043 qpair failed and we were unable to recover it.
00:38:26.043 [2024-12-14 00:19:05.045731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.045744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.045838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.045852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.045994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.044 [2024-12-14 00:19:05.046777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.044 qpair failed and we were unable to recover it.
00:38:26.044 [2024-12-14 00:19:05.046858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.046871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 
00:38:26.044 [2024-12-14 00:19:05.047365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.047850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 
00:38:26.044 [2024-12-14 00:19:05.047951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.047964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 
00:38:26.044 [2024-12-14 00:19:05.048539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.048772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.048785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.049011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.049024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.049107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.049120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 
00:38:26.044 [2024-12-14 00:19:05.049203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.049217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.049305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.049318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.049400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.044 [2024-12-14 00:19:05.049414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.044 qpair failed and we were unable to recover it. 00:38:26.044 [2024-12-14 00:19:05.049566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.049579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.049652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.049666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.049869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.049883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.049948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.049961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.050445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.050927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.050940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.051119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.051751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.051975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.051988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.052221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.052823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.052934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.052998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.053392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.053832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.053846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 
00:38:26.045 [2024-12-14 00:19:05.053987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.054000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.054071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.045 [2024-12-14 00:19:05.054084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.045 qpair failed and we were unable to recover it. 00:38:26.045 [2024-12-14 00:19:05.054188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 
00:38:26.046 [2024-12-14 00:19:05.054447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.054862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 
00:38:26.046 [2024-12-14 00:19:05.054965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.054977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 
00:38:26.046 [2024-12-14 00:19:05.055421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.055880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.055893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 
00:38:26.046 [2024-12-14 00:19:05.056034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.056047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.056150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.056163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.056249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.056263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.056409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.056423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 00:38:26.046 [2024-12-14 00:19:05.056506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.046 [2024-12-14 00:19:05.056520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.046 qpair failed and we were unable to recover it. 
00:38:26.046 [2024-12-14 00:19:05.056591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.046 [2024-12-14 00:19:05.056604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.046 qpair failed and we were unable to recover it.
[the three messages above repeat with successive timestamps through 2024-12-14 00:19:05.069795, all for the same tqpair=0x61500033fe80, addr=10.0.0.2, port=4420]
00:38:26.049 [2024-12-14 00:19:05.069873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.069887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.069991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 
00:38:26.049 [2024-12-14 00:19:05.070429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.070862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 
00:38:26.049 [2024-12-14 00:19:05.070955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.070969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.071143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.071158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.071231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.071245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.071399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.071412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 00:38:26.049 [2024-12-14 00:19:05.071498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.049 [2024-12-14 00:19:05.071512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.049 qpair failed and we were unable to recover it. 
00:38:26.049 [2024-12-14 00:19:05.071589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.071603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.071761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.071775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.071918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.071931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.072326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.072833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.072846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.072989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.073630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.073909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.073922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.074197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.074897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.074979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.074993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.075389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.075774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 
00:38:26.050 [2024-12-14 00:19:05.075856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.075869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.076023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.076037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.076122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.076135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.076283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.076298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.050 qpair failed and we were unable to recover it. 00:38:26.050 [2024-12-14 00:19:05.076376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.050 [2024-12-14 00:19:05.076389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.051 [2024-12-14 00:19:05.076485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.076498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.076636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.076649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.076720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.076733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.076811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.076825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.076966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.076979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.051 [2024-12-14 00:19:05.077043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.051 [2024-12-14 00:19:05.077566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.077971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.077984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.051 [2024-12-14 00:19:05.078067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.051 [2024-12-14 00:19:05.078697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.078946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.078961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.079116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.079131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 00:38:26.051 [2024-12-14 00:19:05.079206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.051 [2024-12-14 00:19:05.079219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.051 qpair failed and we were unable to recover it. 
00:38:26.052 [2024-12-14 00:19:05.081626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.052 [2024-12-14 00:19:05.081653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.052 qpair failed and we were unable to recover it.
00:38:26.053 [2024-12-14 00:19:05.086558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.053 [2024-12-14 00:19:05.086583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.053 qpair failed and we were unable to recover it.
00:38:26.054 [2024-12-14 00:19:05.092719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.092732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.092869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.092882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.092960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.092974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.093137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.093180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.093516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.093607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 
00:38:26.054 [2024-12-14 00:19:05.093782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.093828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.093949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.093970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.094077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.094099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.094266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.094288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.094465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.094487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 
00:38:26.054 [2024-12-14 00:19:05.094644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.094690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.094845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.094900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 
00:38:26.054 [2024-12-14 00:19:05.095577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.095918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.095939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.096033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.096054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 
00:38:26.054 [2024-12-14 00:19:05.096237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.096282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.096447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.096496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.096665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.096709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.096919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.096963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.097097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.097138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 
00:38:26.054 [2024-12-14 00:19:05.097336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.097379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.097598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.054 [2024-12-14 00:19:05.097642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.054 qpair failed and we were unable to recover it. 00:38:26.054 [2024-12-14 00:19:05.097875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.097897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.098067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.098089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.098274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.098317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.098540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.098583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.098815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.098858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.099068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.099273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.099392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.099522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.099639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.099755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.099796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.100001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.100042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.100266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.100307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.100520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.100562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.100733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.100775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.100908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.100950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.101109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.101150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.101306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.101355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.101567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.101613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.101767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.101810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.102072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.102113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.102258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.102280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.102384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.102404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.102574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.102595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.102689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.102730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.103001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.103045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.103192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.103233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.103378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.103420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.103590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.103633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.103846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.103888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.104508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.104860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.104903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.105160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.105201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 
00:38:26.055 [2024-12-14 00:19:05.105415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.105428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.055 [2024-12-14 00:19:05.105594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.055 [2024-12-14 00:19:05.105608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.055 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.105751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.105769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.105919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.105933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.106026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 
00:38:26.056 [2024-12-14 00:19:05.106123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.106318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.106596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.106782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.106980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.106993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 
00:38:26.056 [2024-12-14 00:19:05.107091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.107104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.107215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.107229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.107393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.107435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.107582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.107625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.107911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.107953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 
00:38:26.056 [2024-12-14 00:19:05.108043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.108056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.108131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.108145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.108261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.108304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.108533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.108576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.108729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.108776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 
00:38:26.056 [2024-12-14 00:19:05.109061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.109109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.109298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.109321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.109431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.109486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.109727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.109769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.109916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.109958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 
00:38:26.056 [2024-12-14 00:19:05.110135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.110149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.110323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.110337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.110507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.110521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.110722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.056 [2024-12-14 00:19:05.110736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.056 qpair failed and we were unable to recover it. 00:38:26.056 [2024-12-14 00:19:05.110876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.110889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 
00:38:26.340 [2024-12-14 00:19:05.111060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.111102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.111256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.111299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.111461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.111514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.111678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.111732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.111967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.112013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 
00:38:26.340 [2024-12-14 00:19:05.112224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.112285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.112518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.112541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.113784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.113823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.114053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.114098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.114300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.114342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 
00:38:26.340 [2024-12-14 00:19:05.114552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.114596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.114862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.114916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.115109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.115169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.115266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.115286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 00:38:26.340 [2024-12-14 00:19:05.115458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.340 [2024-12-14 00:19:05.115484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.340 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.115642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.115664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.115827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.115846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.116324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.116889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.116902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.117044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.117285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.117446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.117629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.117729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.117839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.117853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.118608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.118957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.118970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.119121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.119817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.119932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.119945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 
00:38:26.341 [2024-12-14 00:19:05.120491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.341 qpair failed and we were unable to recover it. 00:38:26.341 [2024-12-14 00:19:05.120925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.341 [2024-12-14 00:19:05.120939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.121101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.121218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.121388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.121492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.121733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.121827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.121840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.121987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.122646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.122955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.122969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.123106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.123215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.123379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.123628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.123790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.123887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.123901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.124052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.124066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.124293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.124335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.124467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.124512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.124729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.124771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.124983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.125024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.125216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.125257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.125471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.125514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.125726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.125768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.125929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.125971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.126189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.126229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.126464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.126478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.126544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.126558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.126653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.126667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.126829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.126870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.127023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.127064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 
00:38:26.342 [2024-12-14 00:19:05.127197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.127238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.127507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.127552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.342 [2024-12-14 00:19:05.127674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.342 [2024-12-14 00:19:05.127716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.342 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.127876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.127917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.128062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.128103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.128252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.128293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.128455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.128498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.128707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.128750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.129000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.129014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.129181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.129195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.129299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.129341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.129486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.129530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.129729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.129772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.130316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.130715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.130882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.130895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.131050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.131091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.131289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.131331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.131593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.131639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.131782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.131976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.132018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.132203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.132229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.132446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.132460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.132560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.132602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.132830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.132871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.133066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.133108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.133235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.133248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.133345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.133358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.133466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.133509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.133822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.133865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.134007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.134201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.134348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.134434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.134540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 
00:38:26.343 [2024-12-14 00:19:05.134725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.343 [2024-12-14 00:19:05.134918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.343 [2024-12-14 00:19:05.134961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.343 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.135120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.135161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.135308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.135321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.135525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.135539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.135612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.135633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.135774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.135810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.136027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.136222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.136386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.136521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.136743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.136834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.136847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.137026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.137068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.137215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.137258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.137390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.137432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.137652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.137695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.137831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.137874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.138137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.138179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.138457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.138514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.138719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.138761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.139058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.139258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.139406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.139601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.139731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.139927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.139969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.140127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.140169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.140368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.140381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.140462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.140476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.140682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.140695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.140897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.140910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.141136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.141178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.141456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.141500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.141655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.141697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 
00:38:26.344 [2024-12-14 00:19:05.141898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.141940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.142067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.142109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.142395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.344 [2024-12-14 00:19:05.142409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.344 qpair failed and we were unable to recover it. 00:38:26.344 [2024-12-14 00:19:05.142496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.142509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.142590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.142603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 
00:38:26.345 [2024-12-14 00:19:05.142814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.142856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.143058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.143100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.143307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.143320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.143461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.143496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.143666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.143680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 
00:38:26.345 [2024-12-14 00:19:05.143832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.143874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.144143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.144186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.144336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.144348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.144518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.144532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.144681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.144724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 
00:38:26.345 [2024-12-14 00:19:05.144854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.144895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.145095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.145325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.145415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.145601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 
00:38:26.345 [2024-12-14 00:19:05.145687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.145943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.145986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.146135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.146175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.146327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.146340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 00:38:26.345 [2024-12-14 00:19:05.146424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.345 [2024-12-14 00:19:05.146446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.345 qpair failed and we were unable to recover it. 
00:38:26.345 [2024-12-14 00:19:05.146519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.146534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.146695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.146730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.146928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.146969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.147963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.147977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.148051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.148066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.149046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.149084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.149235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.149251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.149323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.149336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.149491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.345 [2024-12-14 00:19:05.149536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.345 qpair failed and we were unable to recover it.
00:38:26.345 [2024-12-14 00:19:05.149676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.149720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.149920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.149962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.150168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.150210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.150343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.150385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.150602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.150646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.150854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.150894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.151037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.151051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.151222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.151263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.151467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.151510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.151648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.151689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.151852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.151895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.152919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.152961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.153110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.153152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.153354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.153396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.153628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.153672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.153829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.153871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.154963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.154978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.155066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.155107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.155254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.155295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.156514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.156542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.156817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.156861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.157073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.157117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.157325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.157367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.157543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.157587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.346 qpair failed and we were unable to recover it.
00:38:26.346 [2024-12-14 00:19:05.157724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.346 [2024-12-14 00:19:05.157767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.158922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.158965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.159120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.159162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.159306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.159348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.160805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.160819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.161967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.161981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.162911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.162998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.163823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.163968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.164010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.164158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.164201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.164404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.164467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.164579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.347 [2024-12-14 00:19:05.164594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.347 qpair failed and we were unable to recover it.
00:38:26.347 [2024-12-14 00:19:05.164742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.348 [2024-12-14 00:19:05.164756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.348 qpair failed and we were unable to recover it.
00:38:26.348 [2024-12-14 00:19:05.164904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.164917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.164993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.165083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.165163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.165272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.165462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.165651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.165840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.165881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.166033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.166075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.166215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.166257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.166399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.166454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.166606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.166649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.167714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.167741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.168002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.168017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.168181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.168225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.168385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.168429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.168785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.168843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.169038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.169081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.169233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.169274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.169364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.169378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.169545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.169592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.169788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.169831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.170758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.170783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.170901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.170918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.171053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.171174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.171333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.171489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.171652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.171899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.171942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.172092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.172134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.172308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.172321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.172471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.172485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.172564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.172578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.172771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.172813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 
00:38:26.348 [2024-12-14 00:19:05.172960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.173001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.173147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.173191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.173387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.173412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.348 qpair failed and we were unable to recover it. 00:38:26.348 [2024-12-14 00:19:05.173609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.348 [2024-12-14 00:19:05.173654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.173795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.173836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.173979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.174021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.174305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.174346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.174551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.174595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.174800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.174842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.174990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.175033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.175248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.175290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.175541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.175584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.175788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.175830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.176045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.176087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.176219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.176260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.176520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.176564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.176704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.176748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.177020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.177214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.177425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.177552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.177662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.177839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.177883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.178113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.178155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.178292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.178335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.178498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.178512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.178670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.178712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.178924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.178966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.179096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.179138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.179352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.179400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.179554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.179598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.179797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.179839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.180104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.180147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.180342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.180384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.180536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.180580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 
00:38:26.349 [2024-12-14 00:19:05.180745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.180788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.181063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.181106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.181361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.181402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.181496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.349 [2024-12-14 00:19:05.181510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.349 qpair failed and we were unable to recover it. 00:38:26.349 [2024-12-14 00:19:05.181739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.181754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 
00:38:26.350 [2024-12-14 00:19:05.181909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.181922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.182004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.182018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.182156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.182197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.182407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.182463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.182616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.182660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 
00:38:26.350 [2024-12-14 00:19:05.182878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.182932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.183143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.183156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.183328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.183371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.183585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.183629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.183768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.183810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 
00:38:26.350 [2024-12-14 00:19:05.184020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.184063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.184206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.184248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.184385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.184409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.184550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.184565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 00:38:26.350 [2024-12-14 00:19:05.184636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.350 [2024-12-14 00:19:05.184649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.350 qpair failed and we were unable to recover it. 
00:38:26.350 [2024-12-14 00:19:05.184871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.184913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.185189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.185275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.185530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.185555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.185670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.185714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.185871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.185913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.186059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.186103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.186236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.186279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.186485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.186500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.186658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.186701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.186904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.186945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.187848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.187891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.188054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.188100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.188250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.188293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.188460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.188504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.188634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.188677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.188819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.188862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.189048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.189187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.189229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.350 [2024-12-14 00:19:05.189364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.350 [2024-12-14 00:19:05.189378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.350 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.189466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.189481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.189567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.189581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.189648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.189662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.189740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.189753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.189876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.189917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.190910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.190999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.191013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.191228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.191272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.191419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.191475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.191633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.191676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.191888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.191930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.192193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.192280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.192462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.192509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.192610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.192634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.192727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.192742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.192884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.192898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.193905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.193987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.194034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.194180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.194222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.194362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.194405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.194549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.194592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.194798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.194840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.194973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.195015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.195157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.195200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.195344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.351 [2024-12-14 00:19:05.195386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.351 qpair failed and we were unable to recover it.
00:38:26.351 [2024-12-14 00:19:05.195589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.195635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.195839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.195894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.196085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.196099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.196184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.196197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.196399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.196457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.196604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.196646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.196856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.196898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.197126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.197168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.197327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.197369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.197504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.197548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.197746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.197789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.197931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.197972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.198113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.198127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.198883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.198909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.199801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.199843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.200125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.200167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.200332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.200379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.200630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.200674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.200993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.201036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.201320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.201363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.201522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.201566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.201777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.201818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.202056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.202099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.202259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.202300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.202452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.202496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.202635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.202678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.202876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.202918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.203941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.352 [2024-12-14 00:19:05.203982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.352 qpair failed and we were unable to recover it.
00:38:26.352 [2024-12-14 00:19:05.204183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.352 [2024-12-14 00:19:05.204226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.352 qpair failed and we were unable to recover it. 00:38:26.352 [2024-12-14 00:19:05.204362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.352 [2024-12-14 00:19:05.204375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.204535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.204577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.204777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.204819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.205025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.205081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.205232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.205246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.205420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.205476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.205598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.205640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.205861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.205904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.206186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.206229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.206354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.206396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.206569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.206584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.206751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.206795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.207015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.207058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.207252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.207294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.207454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.207498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.207654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.207696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.207831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.207873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.208009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.208248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.208519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.208629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.208734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.208914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.208957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.209260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.209302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.209456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.209719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.209763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.209969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.210183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.210277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.210522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.210702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.210888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.210931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.211097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.211143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.211315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.211330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 
00:38:26.353 [2024-12-14 00:19:05.211500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.211546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.211758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.353 [2024-12-14 00:19:05.211801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.353 qpair failed and we were unable to recover it. 00:38:26.353 [2024-12-14 00:19:05.211941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.211983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.212118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.212160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.212306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.212348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.212489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.212503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.212724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.212766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.212909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.212952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.213077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.213320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.213418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.213506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.213742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.214365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.214825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.214839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.214993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.215006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.215173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.215216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.215372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.215413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.215552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.215595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.215786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.215828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.216021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.216068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.216273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.216287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.216435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.216490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.216633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.216676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.216958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.217366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.217532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.217631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.217842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.217884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.218148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.218191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.218404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.218417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.218634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.218648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.218812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.218855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 
00:38:26.354 [2024-12-14 00:19:05.219088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.219129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.354 qpair failed and we were unable to recover it. 00:38:26.354 [2024-12-14 00:19:05.219399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.354 [2024-12-14 00:19:05.219464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.219617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.219659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.219858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.219900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.220095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.220136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 
00:38:26.355 [2024-12-14 00:19:05.220277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.220319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.220545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.220559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.220696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.220710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.220850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.220864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 00:38:26.355 [2024-12-14 00:19:05.221049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.221089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it. 
00:38:26.355 [2024-12-14 00:19:05.221349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.355 [2024-12-14 00:19:05.221392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.355 qpair failed and we were unable to recover it.
00:38:26.355 [... the same error pair (connect() failed, errno = 111 / sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats continuously through 2024-12-14 00:19:05.239248; repeated lines elided ...]
00:38:26.358 [2024-12-14 00:19:05.239248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it.
00:38:26.358 [2024-12-14 00:19:05.239342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.239434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.239691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.239791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.239873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.239886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.240020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.240125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.240224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.240462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.240575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.240736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.240893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.240908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.241423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.241848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.241862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.242123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.242659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.242780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.242987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.243002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.243146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.243159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 00:38:26.358 [2024-12-14 00:19:05.243240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.358 [2024-12-14 00:19:05.243254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.358 qpair failed and we were unable to recover it. 
00:38:26.358 [2024-12-14 00:19:05.243322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.243408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.243501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.243602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.243711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.243813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.243969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.243984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.244190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.244292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.244378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.244463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.244629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.244843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.244858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.245342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.245836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.245987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.246086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.246187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.246423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.246598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.246697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.246865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.246880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.247014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.247206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.247369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.247482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.247744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.247844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.247861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 
00:38:26.359 [2024-12-14 00:19:05.248447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.248901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.359 [2024-12-14 00:19:05.248915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.359 qpair failed and we were unable to recover it. 00:38:26.359 [2024-12-14 00:19:05.249118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.249132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.249307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.249322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.249532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.249547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.249775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.249790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.249888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.249902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.250056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.250251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.250400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.250568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.250751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.250838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.250853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.251015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.251136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.251426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.251606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.251776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.251877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.251892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.251984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.252139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.252354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.252535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.252656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.252817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.252972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.252987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.253181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.253196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.253451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.253466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.253559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.253574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.253780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.253795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.253975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.253990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.254199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.254321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.254503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.254588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.254779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.254890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.254906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.255004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.255019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 
00:38:26.360 [2024-12-14 00:19:05.255160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.255175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.255323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.255338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.255550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.360 [2024-12-14 00:19:05.255566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.360 qpair failed and we were unable to recover it. 00:38:26.360 [2024-12-14 00:19:05.255653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.255668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.255735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.255751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.255851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.255867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.255952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.255966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.256050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.256161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.256341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.256446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.256591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.256818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.256834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.257292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.257832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.257995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.258151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.258253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.258458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.258570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.258673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.258921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.258936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.259076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.259267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.259450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.259626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.259746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.259969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.259983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.260414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.260883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.260899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 
00:38:26.361 [2024-12-14 00:19:05.260994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.261009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.261227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.261245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.361 [2024-12-14 00:19:05.261326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.361 [2024-12-14 00:19:05.261348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.361 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.261485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.261502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.261590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.261606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.261685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.261701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.261936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.261951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.262507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.262832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.262991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.263237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.263344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.263460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.263638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.263750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.263907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.263922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.264018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.264033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.264133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.264149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.264371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.264418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.264679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.264725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.264988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.265254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.265412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.265681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.265782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.265965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.265981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.266139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.266155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.266242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.266257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.266418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.266434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 00:38:26.362 [2024-12-14 00:19:05.266513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.362 [2024-12-14 00:19:05.266528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.362 qpair failed and we were unable to recover it. 
00:38:26.362 [2024-12-14 00:19:05.266615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.362 [2024-12-14 00:19:05.266631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.362 qpair failed and we were unable to recover it.
00:38:26.362 [2024-12-14 00:19:05.266785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.362 [2024-12-14 00:19:05.266803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.266960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.266976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.267135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.267151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.267250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.267290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.267517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.267560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.267796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.267838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.268046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.268087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.268282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.268324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.268549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.268592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.268846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.268862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.269860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.269876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.270963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.270979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.271820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.271997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.272195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.272332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.272472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.272648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.272836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.272858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.273030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.273052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.273155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.273177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.273308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.273325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.273530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.273555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.363 [2024-12-14 00:19:05.273804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.363 [2024-12-14 00:19:05.273826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.363 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.273945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.273967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.274134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.274160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.274332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.274465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.274487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.274604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.274625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.274879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.274901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.275909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.275933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.276929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.276945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.277917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.277933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.278838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.278861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.279082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.279103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.279373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.279395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.279557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.279581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.279701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.279722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.279875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.279896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.280060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.280081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.364 qpair failed and we were unable to recover it.
00:38:26.364 [2024-12-14 00:19:05.280242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.364 [2024-12-14 00:19:05.280263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.280454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.280476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.280589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.280611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.280803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.280828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.280985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.281172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.281280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.281565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.281702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.281811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.281827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.282816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.282994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.365 [2024-12-14 00:19:05.283010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.365 qpair failed and we were unable to recover it.
00:38:26.365 [2024-12-14 00:19:05.283243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.283359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.283556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.283713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.283823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 
00:38:26.365 [2024-12-14 00:19:05.283939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.283955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.284026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.284041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.284219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.284236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.284405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.284420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.284681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.284698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 
00:38:26.365 [2024-12-14 00:19:05.284844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.284859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.285094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.285218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.285373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.285544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 
00:38:26.365 [2024-12-14 00:19:05.285795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.285936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.285953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.286119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.286135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.286334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.286350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.286504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.286520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 
00:38:26.365 [2024-12-14 00:19:05.286668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.286685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.286838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.286854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.365 [2024-12-14 00:19:05.287012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.365 [2024-12-14 00:19:05.287027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.365 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.287130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.287359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.287465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.287696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.287818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.287976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.287993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.288157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.288251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.288415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.288541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.288727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.288839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.288855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.289003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.289176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.289399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.289571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.289746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.289922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.289937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.290712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.290820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.290985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.291087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.291309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.291413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.291587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.291707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.291914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.291944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.292118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.292142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.292312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.292358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.292526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.292545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.292621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.292637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.292842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.292858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.292997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.293013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 
00:38:26.366 [2024-12-14 00:19:05.293173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.293188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.366 qpair failed and we were unable to recover it. 00:38:26.366 [2024-12-14 00:19:05.293279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.366 [2024-12-14 00:19:05.293295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.293503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.293519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.293610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.293627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.293793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.293809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.294018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.294650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.294913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.294929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.295359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.295962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.295978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.296076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.296288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.296450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.296611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.296740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.296901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.296917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.297066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.297233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.297457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.297673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.297799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.297921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.297937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.298093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.298282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.298420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 
00:38:26.367 [2024-12-14 00:19:05.298675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.298831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.367 [2024-12-14 00:19:05.298936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.367 [2024-12-14 00:19:05.298952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.367 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.299309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.299842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.299939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.299955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.300672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.300940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.300965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.301314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.301966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.301981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.302077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.302193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.302280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.302505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.302669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.302844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.302958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.302974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.303062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.303168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.303346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.303574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.303671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.303906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.303922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.304011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.304030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.304171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.304187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 
00:38:26.368 [2024-12-14 00:19:05.304290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.304306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.368 qpair failed and we were unable to recover it. 00:38:26.368 [2024-12-14 00:19:05.304385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.368 [2024-12-14 00:19:05.304401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.304576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.304592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.304738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.304754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.304904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.304920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.305019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.305193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.305428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.305538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.305710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.305829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.305845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.306683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.306949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.306965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.307394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.307895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.307911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.308127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.308296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.308452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.308562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.308761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.308917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.308933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.309097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.309254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.309444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.309667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 
00:38:26.369 [2024-12-14 00:19:05.309826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.309950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.369 [2024-12-14 00:19:05.309966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.369 qpair failed and we were unable to recover it. 00:38:26.369 [2024-12-14 00:19:05.310067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.310083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.310169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.310188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.310407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.310423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 
00:38:26.370 [2024-12-14 00:19:05.310681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.310707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.310907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.310931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.311152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.311174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.311415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.311443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.311630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.311652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 
00:38:26.370 [2024-12-14 00:19:05.311843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.311865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.312033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.312054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.312272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.312294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.312395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.312418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 00:38:26.370 [2024-12-14 00:19:05.312597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.370 [2024-12-14 00:19:05.312616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.370 qpair failed and we were unable to recover it. 
00:38:26.373 [2024-12-14 00:19:05.330379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.330395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.330619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.330636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.330800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.330816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.330981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.330997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.331155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 
00:38:26.373 [2024-12-14 00:19:05.331261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.331505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.331614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.331720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.331807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 
00:38:26.373 [2024-12-14 00:19:05.331979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.331995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.332151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.332254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.332425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.332618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 
00:38:26.373 [2024-12-14 00:19:05.332796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.332956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.332971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.373 [2024-12-14 00:19:05.333219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.373 [2024-12-14 00:19:05.333237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.373 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.333414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.333433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.333533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.333548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.333725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.333740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.333903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.333917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.334137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.334313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.334428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.334612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.334722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.334903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.335059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.335224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.335346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.335461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.335636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.335852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.335867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.336027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.336185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.336286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.336462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.336585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.336700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.336918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.336933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.337101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.337116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.337322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.337337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.337551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.337566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.337730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.337745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.337880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.337895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.338076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.338239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.338422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.338618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.338715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.338899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.338914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.339099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.339114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.339187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.339202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.339316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.339331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 
00:38:26.374 [2024-12-14 00:19:05.339490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.339506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.339600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.374 [2024-12-14 00:19:05.339615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.374 qpair failed and we were unable to recover it. 00:38:26.374 [2024-12-14 00:19:05.339760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.339775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.339916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.339932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.340077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 
00:38:26.375 [2024-12-14 00:19:05.340251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.340477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.340583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.340749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.340932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.340947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 
00:38:26.375 [2024-12-14 00:19:05.341132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.341147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.341246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.341261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.341413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.341428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.341536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.341579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.341781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.341823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 
00:38:26.375 [2024-12-14 00:19:05.342080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.342122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.342407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.342459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.342721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.342763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.343030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.343071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 00:38:26.375 [2024-12-14 00:19:05.343290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.375 [2024-12-14 00:19:05.343332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.375 qpair failed and we were unable to recover it. 
00:38:26.375 [2024-12-14 00:19:05.343540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.375 [2024-12-14 00:19:05.343583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.375 qpair failed and we were unable to recover it.
[... the same three-record failure (posix.c:1054 connect() errno = 111, nvme_tcp.c:2288 sock connection error, "qpair failed and we were unable to recover it.") repeats continuously from 00:19:05.343 through 00:19:05.363, all targeting addr=10.0.0.2, port=4420, across tqpairs 0x61500033fe80, 0x615000326200, 0x61500032ff80, and 0x615000350000 ...]
00:38:26.378 [2024-12-14 00:19:05.363654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.363676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.363755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.363771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.363911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.363925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.364064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.364289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 
00:38:26.378 [2024-12-14 00:19:05.364459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.364651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.364821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.364924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.364941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 
00:38:26.378 [2024-12-14 00:19:05.365108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 
00:38:26.378 [2024-12-14 00:19:05.365776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.365876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.365892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 
00:38:26.378 [2024-12-14 00:19:05.366584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.366963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.366979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 00:38:26.378 [2024-12-14 00:19:05.367209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.378 [2024-12-14 00:19:05.367224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.378 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.367395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.367409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.367512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.367528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.367665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.367680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.367837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.367852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.368025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.368135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.368245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.368409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.368534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.368658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.368874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.368923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.369483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.369896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.369984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.370106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.370259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.370350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.370456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.370699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.370787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.370882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.370897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.371566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.371927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.371942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.372014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.372120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.372236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.372425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.372602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.372710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 
00:38:26.379 [2024-12-14 00:19:05.372900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.379 [2024-12-14 00:19:05.372916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.379 qpair failed and we were unable to recover it. 00:38:26.379 [2024-12-14 00:19:05.373062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.373150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.373249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.373405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.373664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.373818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.373972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.373988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.374074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.374091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.374176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.374192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.374430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.374455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.374689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.374704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.374889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.374904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.374988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.375106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.375266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.375373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.375478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.375648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.375803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.375896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.375911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.376602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.376972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.376988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.377288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.377523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.377691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.377799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.377887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.377903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 
00:38:26.380 [2024-12-14 00:19:05.378062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.378078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.378247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.378262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.378373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.378389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.380 [2024-12-14 00:19:05.378594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.380 [2024-12-14 00:19:05.378610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.380 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.378766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.378782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.378985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.379100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.379211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.379367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.379563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.379667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.379840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.379855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.380059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.380075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.380229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.380244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.380423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.380444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.380606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.380630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.380800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.380824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.381015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.381210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.381403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.381585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.381785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.381956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.381977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.382086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.382107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.382269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.382290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.382517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.382540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.382758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.382780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.382969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.382991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.383077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.383106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.383303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.383325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.383595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.383639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.383852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.383896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.384049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.384092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.384254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.384297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.384535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.384579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.384789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.384832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.385118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.385160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.385379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.385423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.385616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.385638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.385797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.385839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.386146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.386188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.386426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.386495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.386791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.386835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.387043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.387085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.387235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.387278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 
00:38:26.381 [2024-12-14 00:19:05.387565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.387611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.381 qpair failed and we were unable to recover it. 00:38:26.381 [2024-12-14 00:19:05.387774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.381 [2024-12-14 00:19:05.387796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.387968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.388010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.388286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.388329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.388614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.388659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 
00:38:26.382 [2024-12-14 00:19:05.388968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.389011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.389245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.389288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.389548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.389592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.389811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.389855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.390121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.390164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 
00:38:26.382 [2024-12-14 00:19:05.390366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.390415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.390645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.390689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.390854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.390898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.391008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.391029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.391216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.391259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 
00:38:26.382 [2024-12-14 00:19:05.391543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.391588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.391816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.391860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.392013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.392055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.392318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.392361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.392503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.392525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 
00:38:26.382 [2024-12-14 00:19:05.392783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.392826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.393032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.393074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.393287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.393330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.393529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.393573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 00:38:26.382 [2024-12-14 00:19:05.393786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.382 [2024-12-14 00:19:05.393830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.382 qpair failed and we were unable to recover it. 
00:38:26.382 [2024-12-14 00:19:05.393968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.382 [2024-12-14 00:19:05.393989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.382 qpair failed and we were unable to recover it.
(The three lines above repeat with varying timestamps from 00:19:05.393968 through 00:19:05.421171; identical repeats omitted.)
00:38:26.385 [2024-12-14 00:19:05.421368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.421390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.421627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.421649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.421761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.421804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.422007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.422049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.422206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.422250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-14 00:19:05.422464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.422509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.422767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.422789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.423008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.423029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.423185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.423207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.423478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.423523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-14 00:19:05.423736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.423779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.424061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.424082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.424372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.424394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.424651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.424695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.424929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.424972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-14 00:19:05.425118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.425161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.425320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.425364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.425527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.425571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.425774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.425816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 00:38:26.385 [2024-12-14 00:19:05.425973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.426015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.385 qpair failed and we were unable to recover it. 
00:38:26.385 [2024-12-14 00:19:05.426230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.385 [2024-12-14 00:19:05.426273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.426489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.426532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.426750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.426771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.426926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.426968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.427216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.427259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.427472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.427516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.427711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.427753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.428061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.428104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.428342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.428387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.428620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.428674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.428910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.428934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.429168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.429190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.429357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.429378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.429539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.429561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.429673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.429695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.429936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.429963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.430084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.430257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.430448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.430635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.430819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.430948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.430969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.431128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.431149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.431308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.431600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.431645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.431854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.431897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.432093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.432136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.432346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.432388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.432669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.432714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.432990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.433033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.433230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.433272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.433536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.433581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.433893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.433945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.434103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.434147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.434459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.434502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.434713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.434756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.435000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.435044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.435251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.435294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.435567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.435610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.435823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.435866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.436028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.436049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.436273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.436315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.436615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.436659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.436847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.436868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.437066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.437109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.437337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.437379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.437651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.437696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.437965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.438008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.438214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.438257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.438464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.438508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.438740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.438765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.438960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.438981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.439138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.439159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.439330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.439373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 00:38:26.386 [2024-12-14 00:19:05.439536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.386 [2024-12-14 00:19:05.439558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.386 qpair failed and we were unable to recover it. 
00:38:26.386 [2024-12-14 00:19:05.439770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.439814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.440032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.440075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.440273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.440316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.440581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.440625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.440776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.440797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.441053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.441097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.441358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.441401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.441561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.441604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.441893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.441935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.442215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.442260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.442413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.442468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.442681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.442725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.442948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.442992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.443212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.443255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.386 qpair failed and we were unable to recover it.
00:38:26.386 [2024-12-14 00:19:05.443546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.386 [2024-12-14 00:19:05.443600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.443787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.443809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.443979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.444022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.444239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.444282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.444454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.444497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.444702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.444745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.445034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.445078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.445356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.445399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.445550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.445594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.445834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.445856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.446043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.446071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.446230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.446252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.446476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.446520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.446738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.446780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.447021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.447064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.447276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.447319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.447526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.447570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.447782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.447803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.447951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.447972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.448129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.448151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.448380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.448422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.448644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.448693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.448892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.448948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.449061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.449082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.449190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.449211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.449403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.449424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.449668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.449689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.449934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.449955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.450146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.450167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.450276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.450297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.450392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.450413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.450664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.450686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.450910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.450931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.451144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.451165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.451401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.451422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.451559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.451582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.451743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.451765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.451949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.451970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.452151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.452172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.452289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.452311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.452481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.452604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.452626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.452805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.452848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.453055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.453098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.453230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.453273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.453489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.453533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.453665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.453708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.453859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.453880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.454209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.454256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.454450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.454476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.454662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.454685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.387 [2024-12-14 00:19:05.454857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.387 [2024-12-14 00:19:05.454900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.387 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.455065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.455109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.455335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.455378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.455672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.455720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.455874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.455916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.456151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.456172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.456277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.456299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.456457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.456479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.456676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.456719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.456930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.456973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.457144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.457194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.457508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.457552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.457766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.457821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.457990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.458010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.458116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.668 [2024-12-14 00:19:05.458137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.668 qpair failed and we were unable to recover it.
00:38:26.668 [2024-12-14 00:19:05.458334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.458385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.458590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.458634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.458932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.458953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.459797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.459818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.460060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.460104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.460261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.460304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.460500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.460545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.460812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.460854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.461085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.461154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.461386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.461429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.461590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.461632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.461898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.461920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.462022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.462043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.462218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.462261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.462526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.462569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.462859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.462902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.463105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.463148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.463307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.463351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.463616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.463660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.463879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.463901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.464067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.464088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.464263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.464285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.464395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.464416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.464672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.464715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.464922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.464965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.465187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.465231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.465466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.465512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.465721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.465764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.466033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.669 [2024-12-14 00:19:05.466055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.669 qpair failed and we were unable to recover it.
00:38:26.669 [2024-12-14 00:19:05.466285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.669 [2024-12-14 00:19:05.466310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.669 qpair failed and we were unable to recover it. 00:38:26.669 [2024-12-14 00:19:05.466533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.669 [2024-12-14 00:19:05.466555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.669 qpair failed and we were unable to recover it. 00:38:26.669 [2024-12-14 00:19:05.466747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.669 [2024-12-14 00:19:05.466790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.669 qpair failed and we were unable to recover it. 00:38:26.669 [2024-12-14 00:19:05.466987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.467030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.467304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.467349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.467565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.467609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.467876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.467918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.468077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.468212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.468426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.468622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.468743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.468870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.468891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.469012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.469033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.469150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.469171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.469342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.469385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.469609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.469655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.469873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.470052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.470095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.470324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.470367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.470698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.470742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.471007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.471051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.471206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.471248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.471561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.471606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.471830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.472083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.472104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.472273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.472294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.472540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.472584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.472747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.472769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.472871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.472892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.472989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.473010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.473188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.473230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.473450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.473494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.473653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.473697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.473984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.474026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 
00:38:26.670 [2024-12-14 00:19:05.474221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.474264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.474479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.474524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.474755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.474776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.475023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.670 [2024-12-14 00:19:05.475044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.670 qpair failed and we were unable to recover it. 00:38:26.670 [2024-12-14 00:19:05.475290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.475312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.475497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.475548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.475759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.475802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.476015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.476057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.476496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.476560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.476841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.476888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.477089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.477132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.477423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.477477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.477765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.477808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.478134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.478178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.478467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.478512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.478744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.478786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.478939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.478982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.479197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.479218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.479403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.479457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.479608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.479651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.479935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.479987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.480114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.480136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.480331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.480353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.480506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.480559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.480771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.480951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.480995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.481245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.481270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.481448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.481470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.481627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.481649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.481829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.481872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.482139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.482182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.482409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.482460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.482734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.482779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.482916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.482959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.483117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.483159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.483313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.483357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.483646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.483702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.483930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.483951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.484051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.484072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.484296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.484339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 
00:38:26.671 [2024-12-14 00:19:05.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.484532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.484775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.671 [2024-12-14 00:19:05.484820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.671 qpair failed and we were unable to recover it. 00:38:26.671 [2024-12-14 00:19:05.484947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.672 [2024-12-14 00:19:05.484968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.672 qpair failed and we were unable to recover it. 00:38:26.672 [2024-12-14 00:19:05.485135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.672 [2024-12-14 00:19:05.485156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.672 qpair failed and we were unable to recover it. 00:38:26.672 [2024-12-14 00:19:05.485311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.672 [2024-12-14 00:19:05.485332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.672 qpair failed and we were unable to recover it. 
00:38:26.672 [2024-12-14 00:19:05.491428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.672 [2024-12-14 00:19:05.491479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.672 qpair failed and we were unable to recover it.
00:38:26.672 [2024-12-14 00:19:05.491707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.672 [2024-12-14 00:19:05.491750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.672 qpair failed and we were unable to recover it.
00:38:26.672 [2024-12-14 00:19:05.491962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.672 [2024-12-14 00:19:05.492060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.672 qpair failed and we were unable to recover it.
00:38:26.672 [2024-12-14 00:19:05.492257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.672 [2024-12-14 00:19:05.492289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.672 qpair failed and we were unable to recover it.
00:38:26.672 [2024-12-14 00:19:05.492472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.672 [2024-12-14 00:19:05.492488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.672 qpair failed and we were unable to recover it.
00:38:26.675 [2024-12-14 00:19:05.511874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.511916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.512146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.512188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.512384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.512426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.512728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.512771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.512981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.513023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.513145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.513158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.513385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.513398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.513586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.513634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.513777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.513820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.514031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.514255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.514489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.514572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.514669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.514760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.514922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.514935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.515170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.515211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.515373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.515416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.515646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.515687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.515814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.515827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.516037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.516079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.516363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.516405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.516551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.516595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.516881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.516923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.517085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.517128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.517311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.517325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.517560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.517603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.517874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.517917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.518119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.518133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.518299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.518341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 
00:38:26.675 [2024-12-14 00:19:05.518539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.518583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.675 qpair failed and we were unable to recover it. 00:38:26.675 [2024-12-14 00:19:05.518741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.675 [2024-12-14 00:19:05.518782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.518964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.518978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.519081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.519122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.519328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.519370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.519666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.519705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.519795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.519808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.519960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.520014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.520300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.520480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.520543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.520746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.520789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.521009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.521022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.521167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.521209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.521414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.521477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.521636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.521677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.521885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.521926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.522114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.522157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.522359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.522372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.522609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.522653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.522919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.522961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.523215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.523229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.523403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.523417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.523586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.523600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.523713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.523755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.523904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.523946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.524175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.524217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.524354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.524396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.524709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.524751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.524963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.525004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.525274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.525316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.525589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.525631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.525825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.525839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.525909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.525922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.526085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.526126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.526342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.526384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.526707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.526750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.526905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.526918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.527143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.527156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.527260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.527274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 00:38:26.676 [2024-12-14 00:19:05.527450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.676 [2024-12-14 00:19:05.527464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.676 qpair failed and we were unable to recover it. 
00:38:26.676 [2024-12-14 00:19:05.527684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.527725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it. 00:38:26.677 [2024-12-14 00:19:05.527937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.527979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it. 00:38:26.677 [2024-12-14 00:19:05.528121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.528172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it. 00:38:26.677 [2024-12-14 00:19:05.528397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.528410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it. 00:38:26.677 [2024-12-14 00:19:05.528526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.528539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it. 
00:38:26.677 [2024-12-14 00:19:05.528705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.677 [2024-12-14 00:19:05.528719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.677 qpair failed and we were unable to recover it.
[identical connect() / qpair-recovery errors repeated for each retry from 00:19:05.528886 through 00:19:05.554835]
00:38:26.680 [2024-12-14 00:19:05.554991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.555033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.555207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.555221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.555459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.555489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.555669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.555709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.555944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.555987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 
00:38:26.680 [2024-12-14 00:19:05.556147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.556189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.556391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.556433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.556685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.556733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.556974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.557016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.557293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.557334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 
00:38:26.680 [2024-12-14 00:19:05.557556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.557598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.557811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.557852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.558134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.558176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.558387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.558429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.558705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.558748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 
00:38:26.680 [2024-12-14 00:19:05.558952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.558993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.559189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.559224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.559447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.559460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.559707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.559721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.559940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.559955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 
00:38:26.680 [2024-12-14 00:19:05.560097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.560218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.560367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.560484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.560588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 
00:38:26.680 [2024-12-14 00:19:05.560698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.560886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.560928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.680 qpair failed and we were unable to recover it. 00:38:26.680 [2024-12-14 00:19:05.561058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.680 [2024-12-14 00:19:05.561100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.561308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.561349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.561493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.561535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.561783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.561825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.562037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.562078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.562229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.562271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.562495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.562538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.562748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.562791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.563026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.563039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.563276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.563317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.563580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.563623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.563764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.563805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.564033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.564074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.564264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.564310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.564572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.564626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.564755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.564796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.565007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.565048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.565261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.565302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.565517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.565560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.565769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.565810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.566031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.566150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.566312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.566399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.566554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.566812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.566852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.567043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.567082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.567274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.567316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.567508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.567551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.567761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.567802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.567958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.567999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.568212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.568255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.568361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.568375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.568594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.568609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.568785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.568799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.568882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.568896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.569054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.569067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.569145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.569159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 
00:38:26.681 [2024-12-14 00:19:05.569303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.681 [2024-12-14 00:19:05.569317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.681 qpair failed and we were unable to recover it. 00:38:26.681 [2024-12-14 00:19:05.569492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.569506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.569588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.569603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.569673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.569686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.569775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.569788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 
00:38:26.682 [2024-12-14 00:19:05.570023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.570065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.570203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.570245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.570458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.570500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.570652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.570693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.570853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.570895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 
00:38:26.682 [2024-12-14 00:19:05.571171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.571212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.571542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.571588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.571808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.571850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.572121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.572135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 00:38:26.682 [2024-12-14 00:19:05.572293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.682 [2024-12-14 00:19:05.572334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.682 qpair failed and we were unable to recover it. 
00:38:26.683 [2024-12-14 00:19:05.579963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.683 [2024-12-14 00:19:05.580019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.683 qpair failed and we were unable to recover it.
00:38:26.683 [2024-12-14 00:19:05.580225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.683 [2024-12-14 00:19:05.580271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.683 qpair failed and we were unable to recover it.
00:38:26.683 [2024-12-14 00:19:05.580491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.683 [2024-12-14 00:19:05.580537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.683 qpair failed and we were unable to recover it.
00:38:26.685 [2024-12-14 00:19:05.596010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.596052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.596307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.596349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.596635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.596649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.596872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.596886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.597062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.597104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 
00:38:26.685 [2024-12-14 00:19:05.597254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.597295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.597485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.597529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.597728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.597769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.598030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.598204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 
00:38:26.685 [2024-12-14 00:19:05.598443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.598601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.598768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.598918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.598931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.599142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.599184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 
00:38:26.685 [2024-12-14 00:19:05.599392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.599433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.599591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.599633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.599778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.685 [2024-12-14 00:19:05.599826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.685 qpair failed and we were unable to recover it. 00:38:26.685 [2024-12-14 00:19:05.600045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.600087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.600319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.600361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.600511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.600554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.600787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.600829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.601091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.601116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.601295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.601338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.601553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.601596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.601810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.601852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.602054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.602096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.602338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.602351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.602528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.602542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.602710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.602751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.602958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.603195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.603356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.603500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.603601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.603766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.603873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.603886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.604035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.604048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.604149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.604191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.604410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.604460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.604670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.604713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.604867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.604909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.605174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.605215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.605435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.605510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.605781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.605822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.605985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.606028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.606237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.606291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.606565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.606579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.606777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.606790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.606968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.606981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.607219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.607260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.607480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.607523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.607735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.607778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.607925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.607966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.686 [2024-12-14 00:19:05.608117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.608159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 
00:38:26.686 [2024-12-14 00:19:05.608302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.686 [2024-12-14 00:19:05.608345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.686 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.608505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.608519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.608747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.608795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.608953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.608995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.609293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.609335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 
00:38:26.687 [2024-12-14 00:19:05.609577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.609620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.609828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.609869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.610179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.610220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.610482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.610524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.610822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.610864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 
00:38:26.687 [2024-12-14 00:19:05.611106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.611148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.611363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.611414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.611577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.611590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.611745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.611758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.611975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.612017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 
00:38:26.687 [2024-12-14 00:19:05.612168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.612209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.612412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.612463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.612655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.612669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.612829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.612870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.613132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 
00:38:26.687 [2024-12-14 00:19:05.613328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.613546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.613645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.613743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 00:38:26.687 [2024-12-14 00:19:05.613901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.687 [2024-12-14 00:19:05.613915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.687 qpair failed and we were unable to recover it. 
00:38:26.690 [2024-12-14 00:19:05.637883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.637923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.638190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.638454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.638569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.638739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 
00:38:26.690 [2024-12-14 00:19:05.638847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.638941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.638955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.639181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.639195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.639330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.639343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.639412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.639426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 
00:38:26.690 [2024-12-14 00:19:05.639571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.690 [2024-12-14 00:19:05.639627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.690 qpair failed and we were unable to recover it. 00:38:26.690 [2024-12-14 00:19:05.639782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.639828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.640070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.640157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.640343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.640390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.640548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.640598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 
00:38:26.691 [2024-12-14 00:19:05.640861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.640904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.641130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.641173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.641394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.641436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.641649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.641691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.641837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.641880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 
00:38:26.691 [2024-12-14 00:19:05.642108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.642149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.642401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.642414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.642641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.642655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.642821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.642863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.643076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.643117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 
00:38:26.691 [2024-12-14 00:19:05.643322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.643365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.643533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.643577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.643717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.643759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.643921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.643964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.644173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.644216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 
00:38:26.691 [2024-12-14 00:19:05.644371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.644413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.644697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.644741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.644879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.644921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.645215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.645257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.645539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.645553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 
00:38:26.691 [2024-12-14 00:19:05.645654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.691 [2024-12-14 00:19:05.645668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.691 qpair failed and we were unable to recover it. 00:38:26.691 [2024-12-14 00:19:05.645817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.645859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.645999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.646042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.646303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.646346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.646544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.646587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.646894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.646937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.647139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.647180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.647446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.647490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.647673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.647685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.647777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.647790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.647960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.647973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.648072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.648085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.648261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.648273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.648357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.648370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.648508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.648521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.648763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.648804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.649011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.649065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.649223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.649236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.649401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.649452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.649582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.649629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.649898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.649941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.650152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.650195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.650474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.650517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.650706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.650720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.650944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.650958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.651112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.651153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.651298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.651341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.651606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.651650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.651791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.651833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.652066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.652108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.652238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.652280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.652527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.652540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.652754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.652767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.652928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.652941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.653137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.653178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.692 [2024-12-14 00:19:05.653381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.653423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.653644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.653687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.653824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.653866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.654089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.654131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 00:38:26.692 [2024-12-14 00:19:05.654261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.692 [2024-12-14 00:19:05.654302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.692 qpair failed and we were unable to recover it. 
00:38:26.693 [2024-12-14 00:19:05.654457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.654501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.654697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.654710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.654962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.654976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.655168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.655210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.655529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.655573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.655767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.655809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.656904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.656916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.657077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.657090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.657166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.657179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.657335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.657348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.657556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.657598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.657798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.657840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.658038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.658083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.658280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.658293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.658386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.658402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.658547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.658561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.658777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.658819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.659915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.659957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.660200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.660373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.660545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.660629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.660828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.660976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.661018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.661281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.693 qpair failed and we were unable to recover it.
00:38:26.693 [2024-12-14 00:19:05.661519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.693 [2024-12-14 00:19:05.661561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.661703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.661745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.661884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.661927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.662217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.662258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.662400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.662456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.662682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.662742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.662890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.662932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.663138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.663180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.663325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.663368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.663671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.663685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.663760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.663772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.663860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.663873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.664978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.664991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.665175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.665217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.665431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.665482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.665702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.665744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.665936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.665978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.666193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.666243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.666480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.666493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.666735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.666750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.666822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.666835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.666997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.667010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.667182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.667223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.667361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.667404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.667584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.667627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.667840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.667882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.668176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.668214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.668353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.668366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.668465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.668478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.668625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.668666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.668869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.668911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.669180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.669234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.669393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.694 [2024-12-14 00:19:05.669406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.694 qpair failed and we were unable to recover it.
00:38:26.694 [2024-12-14 00:19:05.669496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.669509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.669804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.669846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.670056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.670098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.670286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.670299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.670535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.670577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.670805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.670847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.671907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.671949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.672179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.672223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.672422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.672436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.672542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.672585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.672820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.672862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.673074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.673116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.673258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.673302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.673453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.673475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.673553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.673595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.673801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.673843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.674070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.674111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.674310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.674323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.674494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.674509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.674693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.674736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.674872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.674914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.675113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.675175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.675351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.675364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.675581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.675772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.675815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.675997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.676042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.676240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.676295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.676514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.676558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.676853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.676894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.677111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.677153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.677409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.677422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.677510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.677561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.695 qpair failed and we were unable to recover it.
00:38:26.695 [2024-12-14 00:19:05.677771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.695 [2024-12-14 00:19:05.677813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.696 qpair failed and we were unable to recover it.
00:38:26.696 [2024-12-14 00:19:05.678050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.696 [2024-12-14 00:19:05.678092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.696 qpair failed and we were unable to recover it.
00:38:26.696 [2024-12-14 00:19:05.678238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.696 [2024-12-14 00:19:05.678250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.696 qpair failed and we were unable to recover it.
00:38:26.696 [2024-12-14 00:19:05.678350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.696 [2024-12-14 00:19:05.678363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.696 qpair failed and we were unable to recover it.
00:38:26.696 [2024-12-14 00:19:05.678513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.696 [2024-12-14 00:19:05.678527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.696 qpair failed and we were unable to recover it.
00:38:26.696 [2024-12-14 00:19:05.678606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.678654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.678853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.678895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.679106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.679283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.679525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 
00:38:26.696 [2024-12-14 00:19:05.679623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.679784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.679975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.679988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.680192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.680204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.680380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.680394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 
00:38:26.696 [2024-12-14 00:19:05.680477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.680491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.680582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.680595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.680758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.681052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.681093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.681309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.681351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 
00:38:26.696 [2024-12-14 00:19:05.681570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.681614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.681849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.681891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.682048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.682090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.682329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.682342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.682489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.682512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 
00:38:26.696 [2024-12-14 00:19:05.682686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.682728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.682874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.682916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.683112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.696 qpair failed and we were unable to recover it. 00:38:26.696 [2024-12-14 00:19:05.683357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.696 [2024-12-14 00:19:05.683370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.683587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.683603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.683765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.683778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.683930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.683972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.684192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.684233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.684452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.684495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.684706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.684747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.684960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.685003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.685259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.685301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.685525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.685567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.685854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.685895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.686108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.686300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.686342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.686565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.686618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.686707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.686720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.686865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.686892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.687028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.687070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.687360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.687401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.687665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.687678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.687841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.687854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.688082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.688095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.688239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.688280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.688593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.688636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.688849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.688890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.689084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.689126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.689370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.689412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.689604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.689617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.689784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.689826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.689980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.690023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.690191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.690236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.690356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.690373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.690598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.690612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.690843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.690855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.691032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.691144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.691298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.691479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 
00:38:26.697 [2024-12-14 00:19:05.691737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.691922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.697 [2024-12-14 00:19:05.691964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.697 qpair failed and we were unable to recover it. 00:38:26.697 [2024-12-14 00:19:05.692113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.692302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.692555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.692643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.692824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.692982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.692995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.693169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.693182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.693460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.693502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.693733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.693775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.694052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.694093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.694237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.694279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.694495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.694538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.694777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.694790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.695012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.695024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.695164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.695177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.695332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.695374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.695604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.695646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.695933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.695975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.696284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.696327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.696538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.696588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.696850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.696862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.696943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.696956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.697191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.697233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.697392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.697433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.697660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.697702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.697838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.697880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.698101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.698143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.698269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.698282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.698460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.698473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.698647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.698660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.698833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.698847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.698996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.699009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.699172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.699216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.699354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.699395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.699666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.699967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.700014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.700262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.700309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.700534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.700549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 
00:38:26.698 [2024-12-14 00:19:05.700650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.698 [2024-12-14 00:19:05.700692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.698 qpair failed and we were unable to recover it. 00:38:26.698 [2024-12-14 00:19:05.700896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.700938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.701162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.701203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.701360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.701401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.701652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.701715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.701957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.702012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.702311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.702361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.702621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.702637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.702773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.702815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.702976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.703017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.703172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.703215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.703365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.703406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.703659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.703701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.703906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.703948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.704080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.704120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.704404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.704456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.704696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.704710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.704815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.704828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.704981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.705022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.705297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.705339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.705646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.705691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.705974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.706015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.706356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.706403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.706621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.706639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.706889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.706930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.707134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.707175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.707414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.707467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.707658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.707699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.707976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.708017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.708211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.708251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.708511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.708547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.708783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.708809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.709071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.709223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.709346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 
00:38:26.699 [2024-12-14 00:19:05.709464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.709656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.699 qpair failed and we were unable to recover it. 00:38:26.699 [2024-12-14 00:19:05.709943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.699 [2024-12-14 00:19:05.709986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.710121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.710163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.710377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.710420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.710651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.710695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.710954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.710998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.711134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.711178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.711309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.711351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.711564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.711589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.711811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.711832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.712001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.712022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.712149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.712206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.712415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.712473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.712687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.712730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.712861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.712904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.713167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.713210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.713448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.713491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.713701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.713744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.714069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.714111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.714394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.714449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.714727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.714750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.714938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.714959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.715184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.715205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.715403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.715458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.715663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.715707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.715919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.715961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.716123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.716166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.716452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.716474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.716659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.716675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.716789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.716830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.717092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.717134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.717367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.717408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.717567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.717610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 00:38:26.700 [2024-12-14 00:19:05.717754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.700 [2024-12-14 00:19:05.717795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.700 qpair failed and we were unable to recover it. 
00:38:26.700 [2024-12-14 00:19:05.718012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.700 [2024-12-14 00:19:05.718054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.700 qpair failed and we were unable to recover it.
00:38:26.700 [2024-12-14 00:19:05.718202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.700 [2024-12-14 00:19:05.718261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.700 qpair failed and we were unable to recover it.
00:38:26.700 [2024-12-14 00:19:05.718417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.700 [2024-12-14 00:19:05.718469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.700 qpair failed and we were unable to recover it.
00:38:26.700 [2024-12-14 00:19:05.718587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.700 [2024-12-14 00:19:05.718608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.700 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.718853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.718875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.718986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.719007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.719273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.719328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.719516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.719559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.719774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.719817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.719968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.720011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.720151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.720193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.720345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.720366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.720574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.720620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.720757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.720797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.721015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.721065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.721211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.721254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.721425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.721480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.721694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.721736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.721872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.721915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.722111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.722152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.722368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.722412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.722657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.722701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.722920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.722962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.723172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.723214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.723448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.723493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.723729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.723749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.723927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.723948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.724102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.724123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.724300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.724347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.724496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.724539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.724741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.724782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.724945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.724987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.725285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.725326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.725619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.725662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.725922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.725963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.726195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.726237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.726452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.701 [2024-12-14 00:19:05.726466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.701 qpair failed and we were unable to recover it.
00:38:26.701 [2024-12-14 00:19:05.726611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.726653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.726775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.726817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.727026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.727067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.727332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.727373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.727682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.727696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.727859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.727900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.728164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.728206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.728352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.728365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.728592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.728636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.728780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.728821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.729102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.729144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.729283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.729325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.729528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.729571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.729742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.729782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.729996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.730184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.730367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.730519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.730677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.730871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.730913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.731124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.731166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.731363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.731376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.731592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.731606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.731762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.731803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.731942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.731984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.732191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.732232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.732414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.732427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.732592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.732633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.732847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.732890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.733166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.733207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.733463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.733494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.733584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.733597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.733693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.733707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.733850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.733892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.734152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.734194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.734481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.734524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.734681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.734723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.734877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.734919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.702 qpair failed and we were unable to recover it.
00:38:26.702 [2024-12-14 00:19:05.735141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.702 [2024-12-14 00:19:05.735181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.735484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.735529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.735745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.735777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.735930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.735944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.736136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.736148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.736293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.736307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.736452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.736469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.736610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.736652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.736861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.736901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.737180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.737222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.737453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.737497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.737717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.737730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.737878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.737919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.738202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.738243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.738393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.738434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.738539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.738551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.738642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.738656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.738817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.738859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.739123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.739164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.739299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.739339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.739553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.739568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.739650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.739663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.739895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.739937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.740148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.740190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.740346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.740388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.740614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.740658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.740870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.740911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.741216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.741258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.741434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.741451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.741644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.741658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.741812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.741853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.742145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.742187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.742397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.742445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.742721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.742763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.742999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.743013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.743164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.743177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.743414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.743427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.703 qpair failed and we were unable to recover it.
00:38:26.703 [2024-12-14 00:19:05.743591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.703 [2024-12-14 00:19:05.743604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.704 qpair failed and we were unable to recover it.
00:38:26.704 [2024-12-14 00:19:05.743764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.704 [2024-12-14 00:19:05.743807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.704 qpair failed and we were unable to recover it.
00:38:26.704 [2024-12-14 00:19:05.743945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.704 [2024-12-14 00:19:05.743987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.704 qpair failed and we were unable to recover it.
00:38:26.704 [2024-12-14 00:19:05.744243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.744284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.744502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.744516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.744726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.744767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.744998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.745202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.745460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.745561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.745712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.745886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.745900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.745993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.746007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.746084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.746098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.746258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.746272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.746381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.746423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.746666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.746708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.746970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.747012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.747166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.747207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.747422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.747474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.747682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.747696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.747847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.747860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.748038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.748052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.748218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.748261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.748476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.748519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.748711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.748725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.748862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.748876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.749112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.749154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.749427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.749481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.749698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.749720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.749866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.749880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.749981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.749995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.750135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.750177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.704 [2024-12-14 00:19:05.750371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.750413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.750546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.750588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.750796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.750809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.750984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.750997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 00:38:26.704 [2024-12-14 00:19:05.751237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.704 [2024-12-14 00:19:05.751279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.704 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.751405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.751454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.751675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.751717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.751977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.751990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.752155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.752168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.752328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.752341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.752499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.752513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.752593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.752606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.752839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.752882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.753117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.753158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.753379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.753391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.753492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.753505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.753717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.753733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.753900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.753914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.754129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.754170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.754368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.754410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.754642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.754684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.754842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.754884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.755077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.755118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.755333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.755375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.755607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.755655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.755815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.755829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.755984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.755998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.756144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.756158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.756234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.756247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.756396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.756410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.756648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.756692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.756953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.756994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.757300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.757341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.757548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.757599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.757795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.757808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 
00:38:26.705 [2024-12-14 00:19:05.757883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.757911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.705 qpair failed and we were unable to recover it. 00:38:26.705 [2024-12-14 00:19:05.758133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.705 [2024-12-14 00:19:05.758175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.758406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.758450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.758597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.758612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.758762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.758803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 
00:38:26.706 [2024-12-14 00:19:05.759093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.759134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.759344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.759386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.759643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.759685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.759934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.759947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.760116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.760158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 
00:38:26.706 [2024-12-14 00:19:05.760373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.760414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.760588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.760612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.760757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.760770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.760920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.760934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 00:38:26.706 [2024-12-14 00:19:05.761010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.706 [2024-12-14 00:19:05.761024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.706 qpair failed and we were unable to recover it. 
00:38:26.709 [2024-12-14 00:19:05.785309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.709 [2024-12-14 00:19:05.785322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.709 qpair failed and we were unable to recover it. 00:38:26.709 [2024-12-14 00:19:05.785474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.709 [2024-12-14 00:19:05.785488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.709 qpair failed and we were unable to recover it. 00:38:26.978 [2024-12-14 00:19:05.785646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.978 [2024-12-14 00:19:05.785689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.785845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.785888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.786148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.786189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.786335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.786378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.786646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.786676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.786822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.786864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.787084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.787126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.787394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.787445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.787586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.787627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.787778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.787820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.788076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.788189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.788356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.788463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.788655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.788854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.788896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.789098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.789140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.789424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.789476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.789610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.789651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.789928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.789941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.790111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.790135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.790294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.790335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.790536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.790580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.790733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.790775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.790893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.790906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.791062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.791102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.791311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.791365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.791601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.791636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.791787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.791800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.791924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.791967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.792159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.792201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.792340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.792382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.792616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.792629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.792773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.792814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.793032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.793074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.793341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.793382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.793594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.793637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.793841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.793872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.794048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.794077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.794227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.794240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.794391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.794433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.794798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.794888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.795192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.795238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.795463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.795516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.795565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:38:26.979 [2024-12-14 00:19:05.795765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.795780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.795940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.795953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.796154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.796167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.796333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.796375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.796606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.796649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.796949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.796992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.797151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.797193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.797388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.797429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.797590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.797633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.797848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.797889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.798116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.798287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.798450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.798567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 
00:38:26.979 [2024-12-14 00:19:05.798652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.979 qpair failed and we were unable to recover it. 00:38:26.979 [2024-12-14 00:19:05.798841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.979 [2024-12-14 00:19:05.798883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.799116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.799158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.799315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.799356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.799572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.799586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 
00:38:26.980 [2024-12-14 00:19:05.799663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.799676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.799850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.799864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.800082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.800124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.800408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.800461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.800739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.800755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 
00:38:26.980 [2024-12-14 00:19:05.800911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.800925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.801118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.801159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.801432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.801503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.801771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.801785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 00:38:26.980 [2024-12-14 00:19:05.801853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.980 [2024-12-14 00:19:05.801867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.980 qpair failed and we were unable to recover it. 
00:38:26.980 [2024-12-14 00:19:05.802079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.802121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.802328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.802370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.802539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.802581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.802789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.802803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.802943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.802956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.803045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.803058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.803199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.803254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.803462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.803504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.803738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.803781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.803960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.803974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.804146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.804188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.804422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.804495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.804777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.804819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.804950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.804991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.805233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.805275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.805488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.805531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.805734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.805748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.805903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.805917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.805996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.806010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.806166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.806180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.806415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.806479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.806695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.806737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.806893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.806934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.807195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.807237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.807472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.807515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.807723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.807764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.807959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.808001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.808265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.808306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.808519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.808560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.808707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.808721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.808872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.808913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.809144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.809186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.809318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.809360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.809551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.809565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.809709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.809762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.809954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.809997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.810203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.810244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.810452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.810494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.810698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.810740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.811030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.811043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.811328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.811370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.980 [2024-12-14 00:19:05.811517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.980 [2024-12-14 00:19:05.811559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.980 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.811782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.811796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.811978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.812020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.812241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.812282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.812490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.812534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.812850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.812863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.813068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.813081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.813200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.813214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.813399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.813452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.813685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.813728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.813983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.814025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.814285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.814326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.814475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.814518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.814801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.814844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.815111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.815153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.815387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.815429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.815652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.815667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.815834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.816138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.816179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.816447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.816489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.816714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.816758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.816911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.816953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.817225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.817266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.817499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.817542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.817693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.817734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.817945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.817987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.818198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.818240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.818531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.818574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.818777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.818818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.819779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.819791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.820015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.820058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.820277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.820319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.820483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.820528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.820722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.820763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.820909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.820955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.821157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.821171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.821337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.821351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.821503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.821547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.821828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.821889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.822044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.822087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.822289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.822330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.822534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.822577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.822841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.822883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.823138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.823152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.823316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.823357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.823622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.823664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.823818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.823859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.824143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.824185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.981 qpair failed and we were unable to recover it.
00:38:26.981 [2024-12-14 00:19:05.824456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.981 [2024-12-14 00:19:05.824499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.824722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.824765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.824973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.825015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.825234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.825277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.825489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.825533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.825764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.825805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.826022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.826121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.826269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.826486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.826738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.826998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.827249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.827292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.827507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.827554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.827692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.827706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.827860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.827902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.828105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.828147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.828407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.828468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.828690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.828732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.829013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.829055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.829266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.982 [2024-12-14 00:19:05.829315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.982 qpair failed and we were unable to recover it.
00:38:26.982 [2024-12-14 00:19:05.829456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.829498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.829769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.829782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.829991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.830173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.830395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 
00:38:26.982 [2024-12-14 00:19:05.830510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.830745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.830844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.830858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.831029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.831070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.831227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.831269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 
00:38:26.982 [2024-12-14 00:19:05.831489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.831539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.831742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.831755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.831920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.831962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.832282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.832324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.832483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.832526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 
00:38:26.982 [2024-12-14 00:19:05.832726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.832767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.833043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.833056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.833222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.833236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.833399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.833447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.833589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.833630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 
00:38:26.982 [2024-12-14 00:19:05.833840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.833882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.834079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.834092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.834180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.834193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.834348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.834362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 00:38:26.982 [2024-12-14 00:19:05.834478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.982 [2024-12-14 00:19:05.834521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.982 qpair failed and we were unable to recover it. 
00:38:26.982 [2024-12-14 00:19:05.834803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.834845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.835057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.835100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.835302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.835343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.835493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.835535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.835664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.835714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.835868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.835881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.836109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.836133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.836323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.836365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.836647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.836704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.836900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.836913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.837100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.837142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.837364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.837405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.837614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.837656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.837850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.837863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.838116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.838164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.838431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.838484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.838772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.838815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.839024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.839066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.839290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.839331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.839539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.839583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.839861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.839902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.840036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.840050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.840274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.840288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.840458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.840499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.840741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.840783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.840907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.840948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.841062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.841159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.841380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.841497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.841650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.841764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.841813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.842024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.842065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.842337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.842380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.842530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.842573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.842714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.842738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.842962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.842975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.843209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.843222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.843315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.843328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.843625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.843669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.843882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.843923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.844119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.844133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.844311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.844352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.844565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.844608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.844760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.844801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.844933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.844946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.845039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.845052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.845203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.845245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.845454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.845498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.845765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 
00:38:26.983 [2024-12-14 00:19:05.846022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.846064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.983 qpair failed and we were unable to recover it. 00:38:26.983 [2024-12-14 00:19:05.846274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.983 [2024-12-14 00:19:05.846316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.846575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.846618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.846898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.846940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.847129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.847144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.847306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.847319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.847529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.847572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.847725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.847766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.847970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.848157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.848392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.848676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.848822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.848979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.848992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.849151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.849193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.849436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.849490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.849754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.849767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.849993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.850007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.850264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.850278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.850424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.850441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.850531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.850544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.850790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.850833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.851036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.851089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.851298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.851340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.851498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.851541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.851756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.851798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.852004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.852045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.852198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.852239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.852554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.852596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.852858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.852899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.853152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.853374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.853460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.853637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.853745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.853925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.853938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.854095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.854109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.854266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.854307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.854614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.854657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.854888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.854901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.855003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.855016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.855168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.855209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.855498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.855541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.855745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.855758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.855978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.855994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.856154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.856196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.856503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.856546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.856695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.856708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.856809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.856846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.857004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.857046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 
00:38:26.984 [2024-12-14 00:19:05.857313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.857355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.857492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.857536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.857665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.857707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.984 qpair failed and we were unable to recover it. 00:38:26.984 [2024-12-14 00:19:05.857926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.984 [2024-12-14 00:19:05.857968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.858237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.858250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.858346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.858359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.858585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.858627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.858896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.858937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.859199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.859362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.859450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.859614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.859774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.859863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.859876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.860085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.860127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.860394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.860435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.860730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.860772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.861090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.861132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.861415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.861680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.861721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.861962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.862004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.862376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.862480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.862742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.862827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.863084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.863131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.863377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.863393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.863549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.863563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.863764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.863777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.863924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.863970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.864234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.864275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.864491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.864534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.864743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.864784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.864889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.864902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.865127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.865168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.865326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.865368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.865665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.865681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.865764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.865778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.865855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.865868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.865967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.866009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.866252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.866295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.866554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.866604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 
00:38:26.985 [2024-12-14 00:19:05.866814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.866828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.866917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.985 [2024-12-14 00:19:05.866931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.985 qpair failed and we were unable to recover it. 00:38:26.985 [2024-12-14 00:19:05.867019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.867172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.867347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 
00:38:26.986 [2024-12-14 00:19:05.867611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.867797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.867948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.867961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.868047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.868061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.868212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.868225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 
00:38:26.986 [2024-12-14 00:19:05.868453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.868467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.868615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.868629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.868797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.868839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.869074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.869115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 00:38:26.986 [2024-12-14 00:19:05.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.986 [2024-12-14 00:19:05.869358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.986 qpair failed and we were unable to recover it. 
00:38:26.986 [2024-12-14 00:19:05.869566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.869616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.869739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.869752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.869886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.869899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.870842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.870994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.871037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.871176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.871218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.871367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.871409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.871557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.871610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.871845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.871893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.872899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.872948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.873213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.873254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.873467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.873510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.873707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.873749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.874089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.874131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.874403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.874452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.874591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.874633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.874841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.874855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.875002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.875043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.875173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.875215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.875425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.875484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.875649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.875691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.875899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.875940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.876249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.876290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.876517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.876561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.876799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.876845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.876984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.986 [2024-12-14 00:19:05.876998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.986 qpair failed and we were unable to recover it.
00:38:26.986 [2024-12-14 00:19:05.877239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.877252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.877466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.877508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.877744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.877786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.878014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.878028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.878149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.878191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.878340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.878382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.878658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.878699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.878862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.878876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.879046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.879088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.879214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.879255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.879497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.879546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.879765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.879790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.879897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.879948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.880253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.880298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.880593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.880639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.880865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.880887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.881101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.881122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.881290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.881311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.881496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.881542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.881802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.881823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.882054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.882076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.882265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.882287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.882461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.882507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.882765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.882821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.882975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.882997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.883221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.883265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.883477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.883521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.883687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.883733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.883885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.883907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.884028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.884070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.884276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.884319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.884590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.884637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.884900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.884943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.885219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.885262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.885462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.885506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.885775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.885818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.885982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.886003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.886186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.886208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.886473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.886519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.886722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.886764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.886989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.887030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.887248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.887293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.887534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.887589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.887803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.887846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.888118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.888158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.888308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.888350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.888660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.987 [2024-12-14 00:19:05.888703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.987 qpair failed and we were unable to recover it.
00:38:26.987 [2024-12-14 00:19:05.888800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.888814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.889086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.889127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.889409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.889463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.889682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.889728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.889886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.889929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.890144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.890165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.890412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.890466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.890682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.890725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.890935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.890957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.891207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.891252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.891544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.891587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.891848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.891890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.892107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.892148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.892359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.892400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.892671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.892715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.892978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.893019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.893239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.893291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.893509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.893553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.893762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.893804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.894008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.894050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.894195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.894208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.894428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.894448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.894531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.894544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.894754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.894796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.895020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.895061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.895250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.895292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.895501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.895545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.895814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.895855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.896061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.896103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.896214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.896228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.896469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.896512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.896746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.896787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.897000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.897042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.897206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.897247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.897396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.897445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.897737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.897778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.897978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.898020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.898168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.898210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.898403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.898450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.898684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.898726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.898950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.898992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.899219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.899260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.899431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.899485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.899786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.899870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.900188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.900275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.900589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.900643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.900917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.900961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.901190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.901235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.988 qpair failed and we were unable to recover it.
00:38:26.988 [2024-12-14 00:19:05.901461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.988 [2024-12-14 00:19:05.901506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.901656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.901699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.901962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.902006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.902217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.902261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.902467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.902511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.902774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.902818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.903016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.903058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.903191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.903213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.903455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.903507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.903727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.903770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.903914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.903965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.904139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.904160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.904264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.904309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.904530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.904575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.904866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.904909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.905073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.905117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.905271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.905314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.905610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.905654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.905828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.905850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.905964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.906012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.906171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.906214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.906372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.906414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.906655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.906698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.906917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.906958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.907175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.907217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.907492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.907534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.907796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.907837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.908039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.908052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.908290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.908332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.908475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.908520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.908806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.908847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.909069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.909111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.909325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.909367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.909661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.909704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.909861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.909902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.910050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.910066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.910225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.910239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.910473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.910487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.910644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.910658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.910824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.910866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.989 [2024-12-14 00:19:05.911140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.989 [2024-12-14 00:19:05.911184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.989 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.911328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.911382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.911566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.911609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.911822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.911865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.912075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.912100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.912309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.912351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.912571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.912615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.912844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.912885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.913115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.913128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.913351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.913365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.913510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.913524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.913704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.913746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.914007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.914050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.914190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.914231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.914495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.914538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.914770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.914812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.915025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.915068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.915223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.915265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.915427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:26.990 [2024-12-14 00:19:05.915478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:26.990 qpair failed and we were unable to recover it.
00:38:26.990 [2024-12-14 00:19:05.915765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.915806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.916016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.916058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.916211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.916254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.916387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.916429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.916651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.916693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 
00:38:26.990 [2024-12-14 00:19:05.916902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.916938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.917098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.917275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.917362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.917546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 
00:38:26.990 [2024-12-14 00:19:05.917659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.917898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.917912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.918081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.918094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.918288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.918330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.918525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.918568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 
00:38:26.990 [2024-12-14 00:19:05.918778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.918819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.919014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.919063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.919271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.919313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.919518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.919560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.919824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.919866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 
00:38:26.990 [2024-12-14 00:19:05.920070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.920093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.920319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.920332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.920416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.920429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.920583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.920625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 00:38:26.990 [2024-12-14 00:19:05.920843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.990 [2024-12-14 00:19:05.920885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.990 qpair failed and we were unable to recover it. 
00:38:26.990 [2024-12-14 00:19:05.921025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.921067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.921225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.921238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.921407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.921460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.921697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.921740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.922029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.922071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.922296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.922338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.922484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.922528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.922765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.922807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.922943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.922986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.923113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.923162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.923399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.923411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.923619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.923633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.923793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.923834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.923983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.924025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.924165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.924208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.924334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.924347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.924496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.924548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.924768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.924809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.925011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.925053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.925238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.925252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.925427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.925483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.925696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.925750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.926039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.926082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.926277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.926319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.926534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.926577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.926847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.926890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.927090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.927103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.927269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.927311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.927525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.927568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.927796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.927839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.927989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.928230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.928377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.928475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.928626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.928817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.928831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.928992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.929034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.929291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.929333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.929596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.929638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.929866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.929909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 
00:38:26.991 [2024-12-14 00:19:05.930121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.930164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.930366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.930533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.930576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.930697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.991 [2024-12-14 00:19:05.930739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.991 qpair failed and we were unable to recover it. 00:38:26.991 [2024-12-14 00:19:05.930873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.930914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 
00:38:26.992 [2024-12-14 00:19:05.931185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.931227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.931536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.931580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.931809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.931850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.932110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.932151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.932331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.932344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 
00:38:26.992 [2024-12-14 00:19:05.932580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.932623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.932819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.932861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.933083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.933127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.933245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.933463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.933505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 
00:38:26.992 [2024-12-14 00:19:05.933716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.933758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.933903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.933950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.934174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.934188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.934397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.934421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 00:38:26.992 [2024-12-14 00:19:05.934531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.934545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it. 
00:38:26.992 [2024-12-14 00:19:05.934805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.992 [2024-12-14 00:19:05.934819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.992 qpair failed and we were unable to recover it.
[The three-message sequence above — posix_sock_create connect() failed with errno = 111, followed by the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats for every reconnect attempt, with timestamps running from 00:19:05.934805 through 00:19:05.959530; no attempt in this span succeeds.]
00:38:26.995 [2024-12-14 00:19:05.959698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.959740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.959900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.959941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.960148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.960189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.960510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.960554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.960765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.960807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.961017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.961059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.961189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.961231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.961383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.961425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.961703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.961744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.961903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.961943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.962170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.962211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.962424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.962475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.962759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.962807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.962963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.962976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.963139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.963152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.963284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.963298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.963569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.963619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.963764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.963805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.964007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.964103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.964266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.964461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.964638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.964894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.964935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.965131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.965145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.965292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.965305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.965486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.965500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.965678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.965719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.965975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.966015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.966285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.966298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 
00:38:26.995 [2024-12-14 00:19:05.966388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.966424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.995 [2024-12-14 00:19:05.966674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.995 [2024-12-14 00:19:05.966716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.995 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.966852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.966892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.967016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.967030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.967178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.967191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.967275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.967307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.967456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.967500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.967770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.967811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.968048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.968077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.968232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.968245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.968390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.968404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.968577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.968619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.968826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.968868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.969008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.969049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.969253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.969267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.969421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.969435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.969612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.969653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.969874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.969916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.970182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.970195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.970344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.970394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.970548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.970589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.970732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.970773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.970971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.971013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.971178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.971192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.971354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.971394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.971670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.971716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.971918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.971934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.972143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.972185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.972345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.972386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.972693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.972736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.973000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.973041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.973232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.973272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.973476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.973519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.973781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.973820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.974040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.974082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.974300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.974341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.974602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.974645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.974881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.974921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.975129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.975169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.975373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.975415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 
00:38:26.996 [2024-12-14 00:19:05.975642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.975685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.975903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.975945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.976076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.976116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.996 [2024-12-14 00:19:05.976311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.996 [2024-12-14 00:19:05.976353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.996 qpair failed and we were unable to recover it. 00:38:26.997 [2024-12-14 00:19:05.976504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.997 [2024-12-14 00:19:05.976547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.997 qpair failed and we were unable to recover it. 
00:38:26.997 [2024-12-14 00:19:05.976818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:26.997 [2024-12-14 00:19:05.976858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:26.997 qpair failed and we were unable to recover it. 
[repeated log output omitted: the same connect()/qpair-recovery error pair recurs continuously from 00:19:05.977 through 00:19:06.000 (log timestamps 00:38:26.997-00:38:27.000), for tqpairs 0x61500033fe80, 0x61500032ff80, 0x615000326200, and 0x615000350000, all targeting addr=10.0.0.2, port=4420 with errno = 111 (ECONNREFUSED); every attempt ends with "qpair failed and we were unable to recover it."]
00:38:27.000 [2024-12-14 00:19:06.000321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.000364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.000526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.000569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.000725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.000768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.001051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.001093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.001295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.001337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.001547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.001561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.001716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.001730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.001968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.002010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.002211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.002254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.002526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.002540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.002633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.002675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.002893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.002935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.003198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.003240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.003542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.003585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.003740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.003782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.003932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.003974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.004235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.004277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.004488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.004531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.004833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.004875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.005155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.005197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.005488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.005502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.005730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.005743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.005957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.005982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.006139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.006152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.006309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.006321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.006407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.006421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.006605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.006622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.006722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.006763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.006972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.007014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.007274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.007316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.007596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.007643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.007839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.007880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.008044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.008086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.008236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.008261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.008465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.008478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 
00:38:27.000 [2024-12-14 00:19:06.008626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.008639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.008870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.000 [2024-12-14 00:19:06.008911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.000 qpair failed and we were unable to recover it. 00:38:27.000 [2024-12-14 00:19:06.009049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.009090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.009349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.009391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.009659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.009701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.009968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.010010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.010287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.010330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.010622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.010636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.010722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.010736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.010899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.010945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.011171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.011213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.011451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.011494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.011699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.011741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.011944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.011986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.012239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.012253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.012407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.012420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.012582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.012597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.012729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.012746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.012907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.012921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.013058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.013071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.013279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.013322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.013472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.013514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.013711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.013751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.014012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.014054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.014246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.014288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.014484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.014498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.014677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.014719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.014924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.014966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.015093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.015135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.015347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.015389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.015681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.015724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.015938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.015987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.016194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.016237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.016382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.016423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.001 [2024-12-14 00:19:06.016678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.016721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 
00:38:27.001 [2024-12-14 00:19:06.016923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.001 [2024-12-14 00:19:06.016966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.001 qpair failed and we were unable to recover it. 00:38:27.002 [2024-12-14 00:19:06.017122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.002 [2024-12-14 00:19:06.017164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.002 qpair failed and we were unable to recover it. 00:38:27.002 [2024-12-14 00:19:06.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.002 [2024-12-14 00:19:06.017470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.002 qpair failed and we were unable to recover it. 00:38:27.002 [2024-12-14 00:19:06.017696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.002 [2024-12-14 00:19:06.017710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.002 qpair failed and we were unable to recover it. 00:38:27.002 [2024-12-14 00:19:06.017852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.002 [2024-12-14 00:19:06.017866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.002 qpair failed and we were unable to recover it. 
00:38:27.002 [2024-12-14 00:19:06.018016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.002 [2024-12-14 00:19:06.018059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.002 qpair failed and we were unable to recover it.
00:38:27.004 [... identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 00:19:06.018252 through 00:19:06.043095, always for tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 ...]
00:38:27.005 [2024-12-14 00:19:06.043247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.043289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.043503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.043516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.043617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.043631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.043774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.043788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.044021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.044187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.044353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.044605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.044728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.044964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.044988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.045171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.045187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.045281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.045323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.045553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.045596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.045747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.045789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.046050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.046092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.046285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.046326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.046532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.046546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.046721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.046763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.047074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.047320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.047542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.047649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.047734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.047904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.047953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.048114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.048156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.048391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.048447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.048654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.048668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.048895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.048908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.049131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.049147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 00:38:27.005 [2024-12-14 00:19:06.049246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.005 [2024-12-14 00:19:06.049259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.005 qpair failed and we were unable to recover it. 
00:38:27.005 [2024-12-14 00:19:06.049343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.049357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.049443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.049457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.049666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.049816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.049830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.050018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.050032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.050204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.050217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.050342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.050384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.050668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.050712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.050916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.050959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.051090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.051121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.051293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.051307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.051567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.051611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.051804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.051845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.052106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.052147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.052362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.052404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.052616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.052629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.052819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.052861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.053064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.053106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.053388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.053430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.053702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.053744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.053962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.054004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.054210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.054251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.054522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.054535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.054688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.054702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.054844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.054864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.055016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.055029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.055173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.055214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.055510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.055554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.055713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.055755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.055968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.056009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 
00:38:27.006 [2024-12-14 00:19:06.056288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.056330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.056534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.056577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.056793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.006 [2024-12-14 00:19:06.056835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.006 qpair failed and we were unable to recover it. 00:38:27.006 [2024-12-14 00:19:06.057043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.057092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.057291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.057332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 
00:38:27.007 [2024-12-14 00:19:06.057631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.057645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.057729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.057771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.058002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.058045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.058264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.058305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.058530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.058543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 
00:38:27.007 [2024-12-14 00:19:06.058645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.058659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.058880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.058922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.059232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.059274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.059501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.059545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 00:38:27.007 [2024-12-14 00:19:06.059765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.059808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it. 
00:38:27.007 [2024-12-14 00:19:06.060088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.007 [2024-12-14 00:19:06.060129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.007 qpair failed and we were unable to recover it.
00:38:27.010 [... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 2024-12-14 00:19:06.060339 through 00:19:06.086866 (over one hundred occurrences, omitted here) ...]
00:38:27.010 [2024-12-14 00:19:06.087002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.087044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.087260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.087304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.087531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.087545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.087715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.087728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.087872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.087885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 
00:38:27.010 [2024-12-14 00:19:06.088046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.088059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.088286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.088300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.088483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.088497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.088611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.088653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.088869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.088912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 
00:38:27.010 [2024-12-14 00:19:06.089071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.089113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.089243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.089284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.089435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.089455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.089623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.089665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.089896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.089938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 
00:38:27.010 [2024-12-14 00:19:06.090158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.090200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.090337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.090378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.090691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.090704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.090869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.090883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.091129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.091172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 
00:38:27.010 [2024-12-14 00:19:06.091390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.010 [2024-12-14 00:19:06.091404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.010 qpair failed and we were unable to recover it. 00:38:27.010 [2024-12-14 00:19:06.091584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.091628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.091796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.091838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.091986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.092028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.092290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.092332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.092553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.092596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.092818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.092832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.092986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.093010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.093209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.093251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.093509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.093552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.093758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.093799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.094074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.094339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.094425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.094600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.094708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.094953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.094995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.095222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.095265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.095479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.095523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.095769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.095783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.096022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.096035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.096184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.096198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.096354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.096395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.096618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.096661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.096946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.096988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.097296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.097339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.097542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.097556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.097769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.097782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.097927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.097940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.098030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.098066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.011 [2024-12-14 00:19:06.098348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.098390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.098530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.098573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.098751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.098764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.099003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.099044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 00:38:27.011 [2024-12-14 00:19:06.099266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.011 [2024-12-14 00:19:06.099309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.011 qpair failed and we were unable to recover it. 
00:38:27.012 [2024-12-14 00:19:06.099566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.099583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.099746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.099764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.099910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.099923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.100123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.100136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.100307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.100350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 
00:38:27.012 [2024-12-14 00:19:06.100588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.100632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.100789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.100831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.101022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.101203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.101378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 
00:38:27.012 [2024-12-14 00:19:06.101573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.101745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.101925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.101965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.102115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.102156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.102279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.102321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 
00:38:27.012 [2024-12-14 00:19:06.102620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.102657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.102965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.103007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.103167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.103208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.103405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 00:38:27.012 [2024-12-14 00:19:06.103667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.012 [2024-12-14 00:19:06.103681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.012 qpair failed and we were unable to recover it. 
00:38:27.012 [2024-12-14 00:19:06.103852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.012 [2024-12-14 00:19:06.103894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.012 qpair failed and we were unable to recover it.
[... near-identical retry records omitted: connect() to addr=10.0.0.2, port=4420 for tqpair=0x61500033fe80 kept failing with errno = 111 from 00:19:06.104 through 00:19:06.126 ...]
00:38:27.283 [2024-12-14 00:19:06.126412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.283 [2024-12-14 00:19:06.126493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:27.283 qpair failed and we were unable to recover it.
00:38:27.283 [2024-12-14 00:19:06.126711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.283 [2024-12-14 00:19:06.126758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.283 qpair failed and we were unable to recover it.
00:38:27.283 [2024-12-14 00:19:06.127005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.283 [2024-12-14 00:19:06.127102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:27.283 qpair failed and we were unable to recover it.
[... further near-identical retries for tqpair=0x61500033fe80 (errno = 111, addr=10.0.0.2, port=4420) through 00:19:06.128 omitted ...]
00:38:27.283 [2024-12-14 00:19:06.128818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.128832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.129043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.129084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.129344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.129386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.129589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.129606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.129689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.129702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 
00:38:27.283 [2024-12-14 00:19:06.129904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.129918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.130084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.130126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.130329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.130371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.130589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.130632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.130829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.130872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 
00:38:27.283 [2024-12-14 00:19:06.131136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.131177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.131344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.131386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.131675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.131718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.131998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.132039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 00:38:27.283 [2024-12-14 00:19:06.132305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.283 [2024-12-14 00:19:06.132347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.283 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.132502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.132546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.132804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.132846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.133009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.133052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.133311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.133353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.133638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.133680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.133823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.133864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.134122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.134165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.134455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.134497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.134700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.134742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.134939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.134980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.135184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.135420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.135577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.135685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.135802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.135977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.135995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.136082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.136182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.136348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.136501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.136635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.136820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.136862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.137153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.137194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.137389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.137430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.137600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.137641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.137787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.137828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.138108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.138148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.138421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.138434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.138639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.138653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.138762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.138804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.139054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.139096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.139401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.139452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.139665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.139708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.139880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.139894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.140076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.140117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 
00:38:27.284 [2024-12-14 00:19:06.140325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.140367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.140594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.140638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.140858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.284 [2024-12-14 00:19:06.140900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.284 qpair failed and we were unable to recover it. 00:38:27.284 [2024-12-14 00:19:06.141207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.141248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.141472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.141516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.285 [2024-12-14 00:19:06.141827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.141869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.142149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.142189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.142472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.142486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.142630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.142643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.142815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.142856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.285 [2024-12-14 00:19:06.143035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.143080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.143287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.143341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.143495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.143509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.143709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.143722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.143942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.143956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.285 [2024-12-14 00:19:06.144127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.144169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.144423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.144446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.144707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.144750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.144953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.144994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.145186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.145227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.285 [2024-12-14 00:19:06.145432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.145454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.145714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.145756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.146064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.146106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.146388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.146430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.146713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.146755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.285 [2024-12-14 00:19:06.146971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.147013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.147232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.147275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.147486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.147530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.147727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.147768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 00:38:27.285 [2024-12-14 00:19:06.148030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.285 [2024-12-14 00:19:06.148072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.285 qpair failed and we were unable to recover it. 
00:38:27.288 [2024-12-14 00:19:06.177679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.177695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.177917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.177958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.178185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.178228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.178507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.178552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.178698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.178739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 
00:38:27.288 [2024-12-14 00:19:06.178951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.178993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.179258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.179299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.179549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.179564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.288 [2024-12-14 00:19:06.179718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.288 [2024-12-14 00:19:06.179731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.288 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.179838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.179851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.180007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.180020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.180231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.180272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.180510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.180553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.180750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.180763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.180956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.181212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.181254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.181494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.181545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.181773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.181787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.181996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.182205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.182374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.182553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.182724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.182891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.182904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.183116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.183158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.183457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.183500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.183774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.183788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.184026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.184069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.184276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.184319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.184648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.184692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.184987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.185028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.185314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.185357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.185614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.185657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.185893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.185935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.186258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.186299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.186564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.186608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.186771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.186813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.187074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.187116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.187323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.187366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.187601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.187646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.187891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.187939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.188264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.188311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.188541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.188556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.188668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.188710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.188910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.188953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 
00:38:27.289 [2024-12-14 00:19:06.189208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.189250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.289 qpair failed and we were unable to recover it. 00:38:27.289 [2024-12-14 00:19:06.189564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.289 [2024-12-14 00:19:06.189607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.189871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.189912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.190174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.190218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.190524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.190568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.190820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.190861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.191015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.191058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.191343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.191384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.191659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.191703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.191911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.191926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.192180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.192221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.192578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.192841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.192855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.192958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.192972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.193193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.193235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.193525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.193567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.193783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.193797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.193980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.194023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.194261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.194320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.194634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.194679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.194940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.194981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.195191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.195234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.195465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.195509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.195653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.195667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.195903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.195946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.196165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.196208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.196519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.196557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.196656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.196669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.196864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.196906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 00:38:27.290 [2024-12-14 00:19:06.197113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.197156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
00:38:27.290 [2024-12-14 00:19:06.197392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.290 [2024-12-14 00:19:06.197434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.290 qpair failed and we were unable to recover it. 
[the connect()/sock-connection-error pair above repeated for every subsequent attempt against tqpair=0x61500033fe80, timestamps 00:19:06.197651 through 00:19:06.226938]
00:38:27.293 [2024-12-14 00:19:06.227242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.227295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.293 qpair failed and we were unable to recover it. 
[two further attempts against tqpair=0x615000326200 at 00:19:06.227564 and 00:19:06.227880 failed identically]
00:38:27.293 [2024-12-14 00:19:06.228204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.228249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.293 qpair failed and we were unable to recover it. 00:38:27.293 [2024-12-14 00:19:06.228468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.228513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.293 qpair failed and we were unable to recover it. 00:38:27.293 [2024-12-14 00:19:06.228828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.228872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.293 qpair failed and we were unable to recover it. 00:38:27.293 [2024-12-14 00:19:06.229197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.229219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.293 qpair failed and we were unable to recover it. 00:38:27.293 [2024-12-14 00:19:06.229396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.293 [2024-12-14 00:19:06.229418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.229600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.229622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.229828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.229871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.230132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.230190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.230350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.230393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.230629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.230651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.230933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.230984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.231293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.231338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.231629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.231675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.231950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.231994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.232324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.232368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.232618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.232664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.232946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.232967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.233085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.233106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.233379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.233497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.233822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.233864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.234111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.234131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.234383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.234405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.234636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.234658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.234899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.234920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.235179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.235201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.235455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.235477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.235666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.235688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.235989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.236034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.236308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.236352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.236643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.236688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.236997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.237041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.237343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.237386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.237687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.237731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.238020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.238041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.238240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.238261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 
00:38:27.294 [2024-12-14 00:19:06.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.238463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.238663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.238685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.238950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.294 [2024-12-14 00:19:06.238997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.294 qpair failed and we were unable to recover it. 00:38:27.294 [2024-12-14 00:19:06.239225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.239275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.239595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.239686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.240014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.240063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.240285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.240330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.240640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.240684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.240959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.240981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.241226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.241247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.241479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.241501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.241670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.241692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.241925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.241969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.242182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.242224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.242479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.242523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.242816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.242866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.243163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.243185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.243445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.243467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.243707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.243729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.243905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.243927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.244089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.244110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.244369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.244411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.244698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.244742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.245051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.245072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.245323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.245345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.245637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.245681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.245961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.246006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.246303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.246347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.246636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.246681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.246982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.247025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.247254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.247297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.247635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.247680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.247984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.248027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.248322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.248365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.248646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.248692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.248997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.249041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.249199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.249242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.249522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.249567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.249782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.249803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 00:38:27.295 [2024-12-14 00:19:06.250047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.295 [2024-12-14 00:19:06.250068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.295 qpair failed and we were unable to recover it. 
00:38:27.295 [2024-12-14 00:19:06.250317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.296 [2024-12-14 00:19:06.250339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.296 qpair failed and we were unable to recover it. 00:38:27.296 [2024-12-14 00:19:06.250462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.296 [2024-12-14 00:19:06.250484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.296 qpair failed and we were unable to recover it. 00:38:27.296 [2024-12-14 00:19:06.250684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.296 [2024-12-14 00:19:06.250729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.296 qpair failed and we were unable to recover it. 00:38:27.296 [2024-12-14 00:19:06.251055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.296 [2024-12-14 00:19:06.251100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.296 qpair failed and we were unable to recover it. 00:38:27.296 [2024-12-14 00:19:06.251309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.296 [2024-12-14 00:19:06.251331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.296 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.283173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.283221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.283524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.283581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.283836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.283859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.284059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.284082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.284261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.284283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.284461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.284506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.284725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.284747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.285006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.285055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.285290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.285335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.285635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.285680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.285972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.286016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.286217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.286240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.286412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.286482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.286786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.286830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.287126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.287148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.287408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.287430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.287766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.287811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.288127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.288170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.288403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.288456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.288763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.288808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.289107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.289130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.289310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.289362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.289681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.289726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.289967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.290024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.290255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.290299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.290540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.290585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.290889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.290933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.291199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.291221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.291394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.291447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.299 [2024-12-14 00:19:06.291750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.291794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 
00:38:27.299 [2024-12-14 00:19:06.292059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.299 [2024-12-14 00:19:06.292081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.299 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.292260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.292282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.292588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.292901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.292945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.293156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.293178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.293414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.293435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.293686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.293710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.293964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.293987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.294118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.294140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.294367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.294390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.294650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.294854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.294877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.295131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.295175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.295396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.295451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.295772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.295818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.296085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.296129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.296349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.296392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.296632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.296686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.296924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.296947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.297130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.297152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.297386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.297408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.297567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.297590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.297831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.297854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.298100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.298144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.298456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.298501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.298780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.298825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.299136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.299159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.299359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.299381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.299504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.299527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.299781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.299803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.300094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.300137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.300424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.300480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.300740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.300790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.301021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.301064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.301280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.301322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.300 [2024-12-14 00:19:06.301646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.301691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.301884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.301906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.302158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.302180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.302458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.302481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 00:38:27.300 [2024-12-14 00:19:06.302644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.300 [2024-12-14 00:19:06.302668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.300 qpair failed and we were unable to recover it. 
00:38:27.301 [2024-12-14 00:19:06.302921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.301 [2024-12-14 00:19:06.302943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.301 qpair failed and we were unable to recover it. 00:38:27.301 [2024-12-14 00:19:06.303207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.301 [2024-12-14 00:19:06.303250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.301 qpair failed and we were unable to recover it. 00:38:27.301 [2024-12-14 00:19:06.303506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.301 [2024-12-14 00:19:06.303552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.301 qpair failed and we were unable to recover it. 00:38:27.301 [2024-12-14 00:19:06.303801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.301 [2024-12-14 00:19:06.303824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.301 qpair failed and we were unable to recover it. 00:38:27.301 [2024-12-14 00:19:06.304043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.301 [2024-12-14 00:19:06.304087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.301 qpair failed and we were unable to recover it. 
00:38:27.301 [2024-12-14 00:19:06.304325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.301 [2024-12-14 00:19:06.304376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:27.301 qpair failed and we were unable to recover it.
[... the three lines above repeat ~115 times between timestamps 00:19:06.304 and 00:19:06.337 as the host retries connecting to 10.0.0.2:4420; every attempt fails with errno = 111 (ECONNREFUSED) and tqpair=0x615000326200 is never recovered ...]
00:38:27.304 [2024-12-14 00:19:06.337755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.337779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.337982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.338005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.338248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.338270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.338502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.338526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.338708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.338731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 
00:38:27.304 [2024-12-14 00:19:06.338847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.338889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.339193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.339237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.339578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.339623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.339931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.339954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.340132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.340154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 
00:38:27.304 [2024-12-14 00:19:06.340322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.340344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.340613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.340657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.340959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.341008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.341168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.341195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.341451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.341498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 
00:38:27.304 [2024-12-14 00:19:06.341787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.341831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.342060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.342083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.342364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.342618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.342641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.342828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.342850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 
00:38:27.304 [2024-12-14 00:19:06.343027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.343049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.343307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.343329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.343568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.343590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.343819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.343841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 00:38:27.304 [2024-12-14 00:19:06.344039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.304 [2024-12-14 00:19:06.344062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.304 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.344317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.344347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.344518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.344541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.344826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.344848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.345029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.345052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.345232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.345254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.345416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.345443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.345685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.345708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.345961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.345984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.346227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.346249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.346454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.346477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.346760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.346783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.346969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.346991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.347248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.347464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.347510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.347735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.347779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.348076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.348098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.348331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.348354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.348549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.348572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.348788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.348811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.348992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.349015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.349205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.349249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.349490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.349534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.349781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.349829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.350130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.350174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.350347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.350390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.350630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.350676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.350844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.350887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.351193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.351236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.351503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.351527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.351695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.351717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.351957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.352001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.352277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.352321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.352611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.352669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.352972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.353016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.305 [2024-12-14 00:19:06.353212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.353235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 
00:38:27.305 [2024-12-14 00:19:06.353497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.305 [2024-12-14 00:19:06.353542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.305 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.353834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.353887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.354153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.354206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.354482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.354527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.354851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.354897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 
00:38:27.306 [2024-12-14 00:19:06.355144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.355188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.355484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.355529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.355766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.355810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.356033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.356076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.356281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.356325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 
00:38:27.306 [2024-12-14 00:19:06.356571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.356617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.356945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.356990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.357294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.357338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.357617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.357663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 00:38:27.306 [2024-12-14 00:19:06.357980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.358024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 
00:38:27.306 [2024-12-14 00:19:06.358298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.306 [2024-12-14 00:19:06.358342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.306 qpair failed and we were unable to recover it. 
[The three-message group above (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 00:19:06.358 through 00:19:06.393, first for tqpair=0x615000326200 and then for tqpair=0x61500033fe80, 0x615000350000, and 0x61500032ff80, all against addr=10.0.0.2, port=4420. Repeated occurrences elided.]
00:38:27.309 [2024-12-14 00:19:06.393334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.393378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.393722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.393767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.394080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.394123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.394394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.394436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.394753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.394795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 
00:38:27.309 [2024-12-14 00:19:06.394944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.394958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.395201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.395244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.395521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.395567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.395790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.395831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.395994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.396008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 
00:38:27.309 [2024-12-14 00:19:06.396254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.396296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.396539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.396583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.396819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.396861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.397156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.397199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 00:38:27.309 [2024-12-14 00:19:06.397529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.397555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.309 qpair failed and we were unable to recover it. 
00:38:27.309 [2024-12-14 00:19:06.397776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.309 [2024-12-14 00:19:06.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.397984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.398007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.398271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.398325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.398631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.398677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.398977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.399021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.399338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.399383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.399675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.399720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.400023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.400067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.400389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.400432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.400771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.400815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.401127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.401169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.401374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.401426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.401627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.401654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.401908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.401953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.402252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.402295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.402635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.402691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.402977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.403022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.403328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.403371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.403612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.403657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.403959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.404004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.404305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.404348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.404648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.404692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.405006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.405050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.405359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.405403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.405641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.405685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.405960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.406004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.406285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.406308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.406601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.406646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.406957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.407001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.407275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.407297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.407496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.407519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.407754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.407776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.407958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.407980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.408260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.408304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.408614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.408658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.310 [2024-12-14 00:19:06.408977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.409020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.409350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.409372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.409630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.409653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.409855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.409877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 00:38:27.310 [2024-12-14 00:19:06.410114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.310 [2024-12-14 00:19:06.410167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.310 qpair failed and we were unable to recover it. 
00:38:27.311 [2024-12-14 00:19:06.410501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.311 [2024-12-14 00:19:06.410557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.311 qpair failed and we were unable to recover it. 00:38:27.311 [2024-12-14 00:19:06.410803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.311 [2024-12-14 00:19:06.410845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.311 qpair failed and we were unable to recover it. 00:38:27.311 [2024-12-14 00:19:06.411030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.311 [2024-12-14 00:19:06.411048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.311 qpair failed and we were unable to recover it. 00:38:27.311 [2024-12-14 00:19:06.411296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.311 [2024-12-14 00:19:06.411318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.311 qpair failed and we were unable to recover it. 00:38:27.311 [2024-12-14 00:19:06.411493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.311 [2024-12-14 00:19:06.411508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.311 qpair failed and we were unable to recover it. 
00:38:27.594 [2024-12-14 00:19:06.411698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.411714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.411927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.411943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.412215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.412230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.412454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.412470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.412739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.412753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 
00:38:27.594 [2024-12-14 00:19:06.412885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.412900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.413123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.413138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.413298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.413317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.413476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.413490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.413650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 
00:38:27.594 [2024-12-14 00:19:06.413813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.413827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.413992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.414113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.414316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.414525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 
00:38:27.594 [2024-12-14 00:19:06.414638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.414810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.414825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.415043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.415058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.415240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.415254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 00:38:27.594 [2024-12-14 00:19:06.415393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.594 [2024-12-14 00:19:06.415407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.594 qpair failed and we were unable to recover it. 
00:38:27.597 [2024-12-14 00:19:06.438983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.438997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 00:38:27.597 [2024-12-14 00:19:06.439176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.439190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 00:38:27.597 [2024-12-14 00:19:06.439422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.439449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 00:38:27.597 [2024-12-14 00:19:06.439615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.439630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 00:38:27.597 [2024-12-14 00:19:06.439873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.439888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 
00:38:27.597 [2024-12-14 00:19:06.440099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.597 [2024-12-14 00:19:06.440113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.597 qpair failed and we were unable to recover it. 00:38:27.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 58790 Killed "${NVMF_APP[@]}" "$@" 00:38:27.597 [2024-12-14 00:19:06.440340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.440354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.440588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.440606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.440822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.440836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.440908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.440923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:38:27.598 [2024-12-14 00:19:06.441108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.441123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.441291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.441305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:38:27.598 [2024-12-14 00:19:06.441534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.441550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:27.598 [2024-12-14 00:19:06.441784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.441799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 [2024-12-14 00:19:06.442009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:27.598 [2024-12-14 00:19:06.442024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.598 [2024-12-14 00:19:06.442300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.442315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.442561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.442576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.442793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.442808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 [2024-12-14 00:19:06.443020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.443035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.443301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.443316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.443481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.443496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.443729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.443756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.443946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.443961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 [2024-12-14 00:19:06.444123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.444138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.444362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.444377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.444566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.444581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.444781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.444797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.444954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.444974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 [2024-12-14 00:19:06.445202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.445217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.445442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.445457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.445638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.445653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.445858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.445874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.446064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.446079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 
00:38:27.598 [2024-12-14 00:19:06.446334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.446350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.446609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.446625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.446820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.446834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.598 [2024-12-14 00:19:06.446994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.598 [2024-12-14 00:19:06.447008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.598 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.447243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.447258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.447414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.447429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.447597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.447612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.447759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.447774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.448013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.448027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.448240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.448254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.448510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.448526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.448690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.448705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.448938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.448954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=59510 00:38:27.599 [2024-12-14 00:19:06.449134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.449149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 59510 00:38:27.599 [2024-12-14 00:19:06.449391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:38:27.599 [2024-12-14 00:19:06.449410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.449584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.449599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 59510 ']' 00:38:27.599 [2024-12-14 00:19:06.449784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.449799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.449965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.449980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.599 [2024-12-14 00:19:06.450218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.450233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.599 [2024-12-14 00:19:06.450459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.450475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.599 [2024-12-14 00:19:06.450634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.450651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.450859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.450874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.599 [2024-12-14 00:19:06.451041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 00:19:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:38:27.599 [2024-12-14 00:19:06.451211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.451379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.451561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.451737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.451969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.451984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.452196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.452211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.452323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.452339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.452492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.452508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.452748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.452764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.452981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.452996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.453115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.453131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 00:38:27.599 [2024-12-14 00:19:06.453326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.599 [2024-12-14 00:19:06.453342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.599 qpair failed and we were unable to recover it. 
00:38:27.599 [2024-12-14 00:19:06.453588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.599 [2024-12-14 00:19:06.453604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.599 qpair failed and we were unable to recover it.
00:38:27.599 [2024-12-14 00:19:06.453731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.599 [2024-12-14 00:19:06.453749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.599 qpair failed and we were unable to recover it.
00:38:27.599 [2024-12-14 00:19:06.453929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.453948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.454218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.454233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.454381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.454397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.454608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.454624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.454724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.454740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.454982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.454998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.455102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.455117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.455339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.455354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.455516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.455533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.455697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.455713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.455924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.455939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.456104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.456119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.456358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.456374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.456624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.456640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.456812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.456827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.456988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.457002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.457241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.457256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.457488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.457503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.457712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.457727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.457955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.457970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.458144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.458158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.458347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.458371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.458573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.458588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.458772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.458786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.459860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.460021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.460036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.460294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.460310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.460561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.460577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.460752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.460767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.460912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.460926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.461179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.461193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.461433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.461454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.461680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.461695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.461950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.461965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.600 [2024-12-14 00:19:06.462127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.600 [2024-12-14 00:19:06.462142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.600 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.462369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.462386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.462553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.462569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.462747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.462762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.462877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.462891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.463043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.463058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.463238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.463252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.463414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.463429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.463586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.463601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.463839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.463854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.464065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.464248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.464406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.464651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.464896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.464994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.465008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.465197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.465212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.465421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.465444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.465625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.465640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.465828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.465843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.466008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.466023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.466277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.466291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.466497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.466513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.466722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.466735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.466956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.466972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.467919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.467933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.468117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.468130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.468309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.468323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.468540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.468555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.468659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.468674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.468927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.468942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.469127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.469141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.469305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.469319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.469467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.469482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.469638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.601 [2024-12-14 00:19:06.469652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.601 qpair failed and we were unable to recover it.
00:38:27.601 [2024-12-14 00:19:06.469803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.469817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.469968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.469984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.470125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.470140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.470384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.470399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.470671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.470685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.470858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.470872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.471031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.471053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.471237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.471251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.471496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.471635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.471649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.471799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.471813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.472020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.472034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.472195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.472211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.472390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.472404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.472687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.472706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.472815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.472830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.473927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.473941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.474860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.474873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.475036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.475050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.475272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.475287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.475475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.475490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.475656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.475669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.475887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.475902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.476144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.602 [2024-12-14 00:19:06.476158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.602 qpair failed and we were unable to recover it.
00:38:27.602 [2024-12-14 00:19:06.476324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.602 [2024-12-14 00:19:06.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.602 qpair failed and we were unable to recover it. 00:38:27.602 [2024-12-14 00:19:06.476582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.602 [2024-12-14 00:19:06.476598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.602 qpair failed and we were unable to recover it. 00:38:27.602 [2024-12-14 00:19:06.476715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.602 [2024-12-14 00:19:06.476729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.602 qpair failed and we were unable to recover it. 00:38:27.602 [2024-12-14 00:19:06.476883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.602 [2024-12-14 00:19:06.476898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.602 qpair failed and we were unable to recover it. 00:38:27.602 [2024-12-14 00:19:06.476988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.602 [2024-12-14 00:19:06.477002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.602 qpair failed and we were unable to recover it. 
00:38:27.602 [2024-12-14 00:19:06.477181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.477195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.477290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.477304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.477515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.477532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.477761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.477775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.477914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.477929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.478046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.478060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.478141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.478155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.478329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.478342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.478579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.478594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.478778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.478794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.478999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.479268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.479445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.479644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.479816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.479985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.479999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.480094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.480108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.480313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.480327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.480505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.480519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.480702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.480715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.480940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.480959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.481268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.481281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.481382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.481395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.481554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.481568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.481738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.481752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.481925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.481939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.482103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.482117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.482370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.482385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.482474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.482488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.482640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.482654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 
00:38:27.603 [2024-12-14 00:19:06.482797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.603 [2024-12-14 00:19:06.482811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.603 qpair failed and we were unable to recover it. 00:38:27.603 [2024-12-14 00:19:06.483062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.483240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.483429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.483615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.483807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.483908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.483922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.484190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.484204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.484379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.484393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.484559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.484574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.484724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.484738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.484957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.484972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.485078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.485244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.485331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.485580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.485678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.485780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.485793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.486056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.486070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.486242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.486255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.486404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.486418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.486686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.486701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.486844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.486857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.487014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.487191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.487477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.487654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.487776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.487926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.487939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.488047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.488217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.488398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.488643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.488802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.488910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.488924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 
00:38:27.604 [2024-12-14 00:19:06.489128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.489142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.489323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.489358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.489582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.489597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.604 qpair failed and we were unable to recover it. 00:38:27.604 [2024-12-14 00:19:06.489691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.604 [2024-12-14 00:19:06.489705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.605 qpair failed and we were unable to recover it. 00:38:27.605 [2024-12-14 00:19:06.489929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.605 [2024-12-14 00:19:06.489944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.605 qpair failed and we were unable to recover it. 
00:38:27.605 [2024-12-14 00:19:06.490039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.605 [2024-12-14 00:19:06.490054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.605 qpair failed and we were unable to recover it.
00:38:27.605 [2024-12-14 00:19:06.490237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.605 [2024-12-14 00:19:06.490252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.605 qpair failed and we were unable to recover it.
00:38:27.605 [2024-12-14 00:19:06.490420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.605 [2024-12-14 00:19:06.490434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.605 qpair failed and we were unable to recover it.
[previous three-line failure sequence repeated for every retry of tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420) from 00:19:06.490622 through 00:19:06.509624, all with errno = 111]
00:38:27.608 [2024-12-14 00:19:06.509774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.608 [2024-12-14 00:19:06.509788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.608 qpair failed and we were unable to recover it.
00:38:27.608 [2024-12-14 00:19:06.509898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.509912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.510077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.510278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.510429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.510632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.510749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.510964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.510977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.511189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.511203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.511347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.511360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.511570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.511585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.511750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.511763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.511988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.512077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.512303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.512575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.512753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.512872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.512886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.512993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.513220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.513340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.513533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.513700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.513862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.513875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.514311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 00:38:27.608 [2024-12-14 00:19:06.514872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.608 [2024-12-14 00:19:06.514886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.608 qpair failed and we were unable to recover it. 
00:38:27.608 [2024-12-14 00:19:06.515036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.515050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.515307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.515321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.515560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.515575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.515682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.515697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.515866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.515880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.516032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.516222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.516387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.516582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.516760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.516868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.516882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.517065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.517280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.517502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.517713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.517838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.517948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.517962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.518205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.518219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.518485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.518501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.518681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.518695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.518791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.518805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.518940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.518953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.519184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.519198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.519347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.519360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.519576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.519590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.519785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.519800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.519890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.519903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.520144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.520160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.520334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.520348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.520528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.520542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.520633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.520648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.520875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.520889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.520994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.521008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.521095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.521109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 00:38:27.609 [2024-12-14 00:19:06.521214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.609 [2024-12-14 00:19:06.521229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.609 qpair failed and we were unable to recover it. 
00:38:27.609 [2024-12-14 00:19:06.521430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.521458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.521611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.521625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.521772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.521786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.521944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.521958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.522133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.522147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 
00:38:27.610 [2024-12-14 00:19:06.522410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.522424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.522613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.522628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.522855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.522869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.523020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.523286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 
00:38:27.610 [2024-12-14 00:19:06.523485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.523643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.523814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.523985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.523999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 00:38:27.610 [2024-12-14 00:19:06.524245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.610 [2024-12-14 00:19:06.524260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.610 qpair failed and we were unable to recover it. 
00:38:27.611 [2024-12-14 00:19:06.532080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:38:27.611 [2024-12-14 00:19:06.532161] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:27.613 [2024-12-14 00:19:06.543972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.543986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.544204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.544218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.544459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.544474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.544612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.544626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.544795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.544810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 
00:38:27.613 [2024-12-14 00:19:06.544965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.544979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.545136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.545351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.545517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.545688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 
00:38:27.613 [2024-12-14 00:19:06.545870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.545981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.545995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.546075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.546089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.546311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.546325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.546458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.546501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 
00:38:27.613 [2024-12-14 00:19:06.546704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.546737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.547049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.547084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.547358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.547374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.547527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.547541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.547762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.547776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 
00:38:27.613 [2024-12-14 00:19:06.547921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.547935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.548136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.548150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.548378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.548393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.548555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.548570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.548782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.548796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 
00:38:27.613 [2024-12-14 00:19:06.548887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.548900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.548998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.613 [2024-12-14 00:19:06.549012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.613 qpair failed and we were unable to recover it. 00:38:27.613 [2024-12-14 00:19:06.549178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.549194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.549333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.549347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.549525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.549539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.549643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.549657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.549860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.549874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.550090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.550104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.550295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.550309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.550558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.550744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.550757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.550924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.550939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.551111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.551278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.551378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.551565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.551739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.551891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.551905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.552056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.552070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.552364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.552389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.552602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.552617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.552782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.552795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.553013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.553028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.553249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.553262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.553507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.553521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.553676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.553689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.553797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.553812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.554024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.554181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.554385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.554598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.554735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.554921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.554937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.555370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.555783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.555988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.556002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 
00:38:27.614 [2024-12-14 00:19:06.556185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.556199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.614 [2024-12-14 00:19:06.556344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.614 [2024-12-14 00:19:06.556357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.614 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.556548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.556568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.556739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.556753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.556996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 
00:38:27.615 [2024-12-14 00:19:06.557196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.557363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.557623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.557816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.557927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.557941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 
00:38:27.615 [2024-12-14 00:19:06.558025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.558038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.558239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.558253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.558459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.558473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.558630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.558643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 00:38:27.615 [2024-12-14 00:19:06.558865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.615 [2024-12-14 00:19:06.558879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.615 qpair failed and we were unable to recover it. 
00:38:27.617 [2024-12-14 00:19:06.575185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.617 [2024-12-14 00:19:06.575199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.617 qpair failed and we were unable to recover it.
00:38:27.617 [2024-12-14 00:19:06.575305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.617 [2024-12-14 00:19:06.575319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.617 qpair failed and we were unable to recover it.
00:38:27.617 [2024-12-14 00:19:06.575475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.617 [2024-12-14 00:19:06.575489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.617 qpair failed and we were unable to recover it.
00:38:27.617 [2024-12-14 00:19:06.575669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.617 [2024-12-14 00:19:06.575683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.617 qpair failed and we were unable to recover it.
00:38:27.617 [2024-12-14 00:19:06.575788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.617 [2024-12-14 00:19:06.575815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:27.617 qpair failed and we were unable to recover it.
00:38:27.617 [2024-12-14 00:19:06.577027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.577140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.577315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.577541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.577712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.577863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.577877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.578075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.578180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.578405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.578588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.578709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.578871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.578884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.579207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.579220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.579424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.579443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.579597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.579611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.579834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.579848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.580148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.580162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.580314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.580328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.580535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.580550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.580702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.580716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.580895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.580909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.581023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.581210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.581371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.581542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.581716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.581878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.581892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.582040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.582190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.582348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.582541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.582708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.582929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.582943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.583160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.583174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.583322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.583337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 
00:38:27.618 [2024-12-14 00:19:06.583584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.583598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.618 qpair failed and we were unable to recover it. 00:38:27.618 [2024-12-14 00:19:06.583706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.618 [2024-12-14 00:19:06.583720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.583833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.583848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.584002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.584228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.584413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.584599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.584773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.584874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.584888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.585063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.585303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.585405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.585512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.585731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.585900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.585914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.586135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.586149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.586405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.586419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.586594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.586609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.586760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.586775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.586925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.586940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.587086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.587099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.587257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.587271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.587446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.587460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.587609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.587624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.587846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.587860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.588000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.588221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.588395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.588637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.588758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.588945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.588964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.589064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.589078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.589164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.589178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.589399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.589413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.589595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.589610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.589827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.589841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.590058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.590072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.590287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.590301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.590535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.590549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.619 [2024-12-14 00:19:06.590767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.590786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 
00:38:27.619 [2024-12-14 00:19:06.590945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.619 [2024-12-14 00:19:06.590959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.619 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.591129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.591142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.591329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.591344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.591497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.591510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.591745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 
00:38:27.620 [2024-12-14 00:19:06.591958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.591972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.592126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.592140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.592285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.592300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.592452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.592466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 00:38:27.620 [2024-12-14 00:19:06.592692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.592706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 
00:38:27.620 [2024-12-14 00:19:06.592866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.620 [2024-12-14 00:19:06.592880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.620 qpair failed and we were unable to recover it. 
00:38:27.623 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x61500033fe80 (addr=10.0.0.2, port=4420) repeats verbatim through 2024-12-14 00:19:06.613583; duplicate occurrences omitted ...]
00:38:27.623 [2024-12-14 00:19:06.613694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.613708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.613809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.613823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.614051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.614065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.614294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.614308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.614584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.614598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 
00:38:27.623 [2024-12-14 00:19:06.614735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.614749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.614889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.614903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 
00:38:27.623 [2024-12-14 00:19:06.615530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.615835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.615990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.616227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 
00:38:27.623 [2024-12-14 00:19:06.616449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.616571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.616680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.616781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.616950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.616963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 
00:38:27.623 [2024-12-14 00:19:06.617147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.617345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.617511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.617608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.617713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 
00:38:27.623 [2024-12-14 00:19:06.617865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.617878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.623 [2024-12-14 00:19:06.618030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.623 [2024-12-14 00:19:06.618044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.623 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.618219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.618232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.618321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.618334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.618536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.618550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.618709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.618723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.618898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.618912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.619107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.619306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.619412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.619598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.619766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.619880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.619893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.620041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.620216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.620411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.620544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.620760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.620874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.620888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.621149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.621162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.621332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.621345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.621588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.621602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.621807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.621821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.621928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.621942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.622124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.622279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.622448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.622561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.622681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.622850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.622863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.623043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.623167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.623333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.623500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.623624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.623842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.623855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.624018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.624031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.624257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.624271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.624427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.624454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.624554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.624568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 
00:38:27.624 [2024-12-14 00:19:06.624660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.624 [2024-12-14 00:19:06.624673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.624 qpair failed and we were unable to recover it. 00:38:27.624 [2024-12-14 00:19:06.624828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.624841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.624936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.624949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.625346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.625877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.625893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.625997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.626186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.626371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.626551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.626654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.626776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.626933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.626947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.627209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.627223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.627318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.627332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.627507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.627521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.627751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.627765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.627945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.627959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.628142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.628156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.628268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.628282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.628431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.628449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.628603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.628620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.628843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.628857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.629109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.629123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.629283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.629297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.629505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.629519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.629673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.629687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.629843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.629856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.630011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.630033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.630291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.630307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.625 [2024-12-14 00:19:06.630565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.630579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 
00:38:27.625 [2024-12-14 00:19:06.630681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.625 [2024-12-14 00:19:06.630695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.625 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.630912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.630925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.631148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.631162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.631434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.631455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.631554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.631568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.631746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.631760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.631916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.631929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.632087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.632100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.632252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.632266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.632480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.632495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.632744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.632757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.632913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.632927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.633110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.633353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.633471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.633655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.633860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.633962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.633976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.634168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.634182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.634409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.634422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.634556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.634573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.634781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.634975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.634988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.635202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.635314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.635444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.635610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.635772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.635943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.635957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.636166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.636287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.636442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.636612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.636717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.636893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.636907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 
00:38:27.626 [2024-12-14 00:19:06.637076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.637090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.637247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.637261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.637345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.637359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.637534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.637548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.626 qpair failed and we were unable to recover it. 00:38:27.626 [2024-12-14 00:19:06.637692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.626 [2024-12-14 00:19:06.637706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.637816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.637829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.638036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.638050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.638221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.638234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.638461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.638475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.638725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.638739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.638983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.638997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.639164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.639178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.639335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.639349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.639459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.639472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.639648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.639662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.639819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.639837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.639993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.640007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.640157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.640170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.640381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.640395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.640616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.640633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.640857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.640871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.641129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.641281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.641380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.641620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.641725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.641884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.641897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.642059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.642239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.642482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.642609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.642773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.642888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.642901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.643112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.643125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.643279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.643294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.643495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.643509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.643753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.643766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.643958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.643972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.644148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.644254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 
00:38:27.627 [2024-12-14 00:19:06.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.644543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.644721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.627 [2024-12-14 00:19:06.644894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.627 [2024-12-14 00:19:06.644907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.627 qpair failed and we were unable to recover it. 00:38:27.628 [2024-12-14 00:19:06.645109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.628 [2024-12-14 00:19:06.645122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.628 qpair failed and we were unable to recover it. 
00:38:27.628 [2024-12-14 00:19:06.645301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.645314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.645497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.645512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.645615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.645629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.645858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.645871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.646041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.646054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.646281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.646303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.646557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.646571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.646665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.646678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.646831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.646844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.647074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.647088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.647318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.647332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.647528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.647541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.647744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.647757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.647912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.647926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.648083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.648102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.648312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.648325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.648579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.648593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.648783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.648797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.648950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.648963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.649227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.649240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.649448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.649462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.649572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.649586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.649803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.649816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.649984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.649997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.650229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.650246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.650511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.650526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.650700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.650714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.650815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.650829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.650927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.650941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.651924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.651938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.652075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.652088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.652222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.652235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.652389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.628 [2024-12-14 00:19:06.652406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.628 qpair failed and we were unable to recover it.
00:38:27.628 [2024-12-14 00:19:06.652564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.652578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.652679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.652692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.652783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.652796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.652983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.653020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.653210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.653243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.653473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.653497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.653621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.653643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.653819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.653842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.654985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.654998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.655175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.655189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.655448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.655464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.655613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.655629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.655737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.655751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.655952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.655965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.656217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.656481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.656495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.656652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.656665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.656765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.656779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.657950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.657964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.658968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.658982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.659138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.659152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.659250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.659268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.659419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.629 [2024-12-14 00:19:06.659432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.629 qpair failed and we were unable to recover it.
00:38:27.629 [2024-12-14 00:19:06.659610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.659623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.659777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.659791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.660038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.660051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.660298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.660311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.660555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.660583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.660724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.660759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.660929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.660953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.661140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.661162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.661376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.661398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.661669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.661691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.661859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.661881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.662025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.662318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.662451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.662627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.662823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.662999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.663020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.663228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.630 [2024-12-14 00:19:06.663254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.630 qpair failed and we were unable to recover it.
00:38:27.630 [2024-12-14 00:19:06.663337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.663358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.663540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.663556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.663775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.663870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.663884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.664040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.664053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 
00:38:27.630 [2024-12-14 00:19:06.664250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.664264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.664478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.664492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.664645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.664659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.664875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.664889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.665039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.665052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 
00:38:27.630 [2024-12-14 00:19:06.665318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.665331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.665495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.665509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.665717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.665731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.665966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.665980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.666080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.666093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 
00:38:27.630 [2024-12-14 00:19:06.666256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.666271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.630 [2024-12-14 00:19:06.666418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.630 [2024-12-14 00:19:06.666432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.630 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.666600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.666614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.666720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.666734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.666838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.666851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.666999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.667213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.667406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.667563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.667833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.667944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.667957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.668059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.668085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.668200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.668225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.668487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.668510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.668631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.668646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.668799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.668813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.669052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.669065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.669321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.669335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.669582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.669596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.669747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.669761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.669931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.669944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.670100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.670114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.670207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.670220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.670474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.670488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.670638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.670654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.670831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.670847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.671022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.671042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:27.631 [2024-12-14 00:19:06.671254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.671269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.671448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.671462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.671662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.671676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.671879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.671893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.672035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.672048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.672188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.672206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.672362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.672376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.672559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.672573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.672778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.672792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.673018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.673271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.673381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.673560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 
00:38:27.631 [2024-12-14 00:19:06.673797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.673914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.673928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.631 [2024-12-14 00:19:06.674079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.631 [2024-12-14 00:19:06.674092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.631 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.674268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.674281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.674430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.674449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.674568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.674582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.674805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.674819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.675028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.675163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.675378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.675602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.675763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.675937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.675950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.676166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.676180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.676275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.676288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.676499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.676513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.676719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.676733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.677002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.677202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.677376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.677613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.677718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.677885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.677898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.678000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.678013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.678103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.678120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.678272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.678287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.678518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.678532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.678741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.678755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.679000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.679013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 00:38:27.632 [2024-12-14 00:19:06.679245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.632 [2024-12-14 00:19:06.679258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.632 qpair failed and we were unable to recover it. 
00:38:27.632 [2024-12-14 00:19:06.679411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.632 [2024-12-14 00:19:06.679425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.632 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x61500033fe80 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats continuously from 00:19:06.679 through 00:19:06.699 ...]
00:38:27.635 [2024-12-14 00:19:06.699854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.699868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.700020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.700210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.700427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.700670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 
00:38:27.635 [2024-12-14 00:19:06.700840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.700944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.700958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 
00:38:27.635 [2024-12-14 00:19:06.701475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.701980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.701994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 
00:38:27.635 [2024-12-14 00:19:06.702167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.702181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.702323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.702337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.702556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.702571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.702662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.702676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.702829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.702842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 
00:38:27.635 [2024-12-14 00:19:06.703075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.703089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.703297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.703310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.703545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.703560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.703720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.703734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.703823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.703836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 
00:38:27.635 [2024-12-14 00:19:06.703991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.704004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.704186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.704201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.635 [2024-12-14 00:19:06.704344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.635 [2024-12-14 00:19:06.704358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.635 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.704592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.704606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.704769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.704924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.704937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.705103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.705274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.705473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.705575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.705740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.705864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.705878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.706449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.706887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.706900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.707042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.707210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.707365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.707522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.707695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.707804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.707905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.707918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.708139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.708234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.708475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.708588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.708757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.708851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.708865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.709020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.709298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.636 [2024-12-14 00:19:06.709571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.709687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.709850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.709953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.709967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 00:38:27.636 [2024-12-14 00:19:06.710117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.636 [2024-12-14 00:19:06.710131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.636 qpair failed and we were unable to recover it. 
00:38:27.909 [2024-12-14 00:19:06.710451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.710466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.710703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.710718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.710811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.710825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.710925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.710939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.711097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 
00:38:27.909 [2024-12-14 00:19:06.711209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.711391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.711569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.711691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.711866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 
00:38:27.909 [2024-12-14 00:19:06.711978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.711992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.712065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.712079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.712232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.712246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.712338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.712352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 00:38:27.909 [2024-12-14 00:19:06.712435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.909 [2024-12-14 00:19:06.712458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.909 qpair failed and we were unable to recover it. 
00:38:27.912 [2024-12-14 00:19:06.731355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.731369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.731506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.731521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.731681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.731696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.731841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.731855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.731946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.731960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 
00:38:27.912 [2024-12-14 00:19:06.732053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.732297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.732452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.732544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.732694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 
00:38:27.912 [2024-12-14 00:19:06.732886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.732901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.733039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.733053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.733223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.733237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.733446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.733461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.733662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.733676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 
00:38:27.912 [2024-12-14 00:19:06.733829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.733843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.734072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.912 [2024-12-14 00:19:06.734085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.912 qpair failed and we were unable to recover it. 00:38:27.912 [2024-12-14 00:19:06.734172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.734185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.734338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.734352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.734581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.734595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.734697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.734711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.734868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.734882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.734988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.735159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.735371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.735557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.735662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.735821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.735932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.735946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.736127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.736224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.736328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.736500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.736682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.736780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.736944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.736957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.737644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.737895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.737992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.738386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.738826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 
00:38:27.913 [2024-12-14 00:19:06.738918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.738932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.739076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.739090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.739269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.913 [2024-12-14 00:19:06.739283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.913 qpair failed and we were unable to recover it. 00:38:27.913 [2024-12-14 00:19:06.739359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.739506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 
00:38:27.914 [2024-12-14 00:19:06.739591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.739682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.739782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.739879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.739893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.739990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 
00:38:27.914 [2024-12-14 00:19:06.740165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.740274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.740456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.740677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.740759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 
00:38:27.914 [2024-12-14 00:19:06.740869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.740976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.740991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 
00:38:27.914 [2024-12-14 00:19:06.741367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 00:38:27.914 [2024-12-14 00:19:06.741741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it. 
00:38:27.914 [2024-12-14 00:19:06.741926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.914 [2024-12-14 00:19:06.741939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.914 qpair failed and we were unable to recover it.
[duplicate log output trimmed: the connect()/qpair-failure triplet above repeats continuously through timestamp 00:19:06.756480, cycling over tqpair values 0x61500033fe80, 0x615000350000, 0x61500032ff80, and 0x615000326200, all targeting addr=10.0.0.2, port=4420 with errno = 111]
00:38:27.917 [2024-12-14 00:19:06.756634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.756648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.756749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.756762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.756829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.756842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 
00:38:27.917 [2024-12-14 00:19:06.757174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 
00:38:27.917 [2024-12-14 00:19:06.757712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.757920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.757933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 
00:38:27.917 [2024-12-14 00:19:06.758275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 
00:38:27.917 [2024-12-14 00:19:06.758833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.758928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.758996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.759109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.759195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 
00:38:27.917 [2024-12-14 00:19:06.759298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.759393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.917 qpair failed and we were unable to recover it. 00:38:27.917 [2024-12-14 00:19:06.759554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.917 [2024-12-14 00:19:06.759568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.759643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.759657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.759743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.759757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.759839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.759852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.759996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.760542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.760957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.760970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.761178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.761683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.761880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.761893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.762304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.762866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.762880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.763066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.763727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.763913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.763927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.918 [2024-12-14 00:19:06.764203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 00:38:27.918 [2024-12-14 00:19:06.764801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.918 [2024-12-14 00:19:06.764814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.918 qpair failed and we were unable to recover it. 
00:38:27.919 [2024-12-14 00:19:06.764906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.764920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 
00:38:27.919 [2024-12-14 00:19:06.765407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 00:38:27.919 [2024-12-14 00:19:06.765846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.919 [2024-12-14 00:19:06.765859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.919 qpair failed and we were unable to recover it. 
00:38:27.921 [2024-12-14 00:19:06.780710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.921 [2024-12-14 00:19:06.780724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.921 qpair failed and we were unable to recover it. 00:38:27.921 [2024-12-14 00:19:06.780815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.921 [2024-12-14 00:19:06.780829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.921 qpair failed and we were unable to recover it. 00:38:27.921 [2024-12-14 00:19:06.780973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.921 [2024-12-14 00:19:06.780986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.921 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.781092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.781247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.781461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.781587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.781689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.781850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.781864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.782072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.782086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.782349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.782363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.782504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.782518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.782699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.782713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.782814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.782827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.782993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.783323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.783505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.783670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.783773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.783810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:27.922 [2024-12-14 00:19:06.783843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:27.922 [2024-12-14 00:19:06.783854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:27.922 [2024-12-14 00:19:06.783864] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:38:27.922 [2024-12-14 00:19:06.783866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:27.922 [2024-12-14 00:19:06.783880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.783977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.783990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.784149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.784267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.784467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.784643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.784758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.784928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.784947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.785223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.785236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.785405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.785418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.785599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.785613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.785812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.785826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.785927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.785941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.786175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.786289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.786380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:38:27.922 [2024-12-14 00:19:06.786480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.786495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:38:27.922 [2024-12-14 00:19:06.786589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:27.922 [2024-12-14 00:19:06.786603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.786612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:38:27.922 [2024-12-14 00:19:06.786711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.786837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 
00:38:27.922 [2024-12-14 00:19:06.786947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.786960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.787201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.787214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.787328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.787342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.922 [2024-12-14 00:19:06.787508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.922 [2024-12-14 00:19:06.787522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.922 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.787616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.787629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.787851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.787866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.787965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.787978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.788230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.788346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.788504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.788625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.788732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.788850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.788864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.789012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.789026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.789243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.789257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.789461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.789492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.789668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.789691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.789820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.789843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.790108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.790129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.790302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.790324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.790509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.790532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.790759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.790781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.791021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.791042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.791287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.791309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.791475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.791492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.791647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.791660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.791838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.791852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.792052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.792229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.792452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.792567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.792672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.792896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.792910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.793135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.793149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.793256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.793270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.923 [2024-12-14 00:19:06.793412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.793427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.793622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.793637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.793798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.793813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.794011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.794027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 00:38:27.923 [2024-12-14 00:19:06.794259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.923 [2024-12-14 00:19:06.794274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.923 qpair failed and we were unable to recover it. 
00:38:27.926 [2024-12-14 00:19:06.814273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.814287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.814372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.814385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.814549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.814562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.814717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.814730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.814884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.814898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 
00:38:27.926 [2024-12-14 00:19:06.814992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.815091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.815330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.815565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.815667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 
00:38:27.926 [2024-12-14 00:19:06.815836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.815950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.815963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.816062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.816282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.816449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 
00:38:27.926 [2024-12-14 00:19:06.816611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.816741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.926 [2024-12-14 00:19:06.816909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.926 [2024-12-14 00:19:06.816923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.926 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.817147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.817315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.817557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.817659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.817767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.817936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.817949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.818117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.818131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.818304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.818320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.818543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.818556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.818783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.818796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.818886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.818900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.819130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.819144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.819299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.819312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.819471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.819485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.819644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.819658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.819829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.819842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.820001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.820182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.820351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.820510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.820703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.820812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.820845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.821101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.821257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.821515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.821640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.821768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.821928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.821941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.822106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.822119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.822291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.822304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.822399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.822412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.822640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.822653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.822860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.822874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.823754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.823959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.823972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.824152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.824165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 00:38:27.927 [2024-12-14 00:19:06.824334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.927 [2024-12-14 00:19:06.824348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.927 qpair failed and we were unable to recover it. 
00:38:27.927 [2024-12-14 00:19:06.824573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.824587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.824675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.824688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.824839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.824852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.824937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.824950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.825062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.825076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.825293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.825309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.825512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.825526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.825677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.825690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.825919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.825935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.826110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.826123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.826348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.826361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.826554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.826567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.826679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.826693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.826908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.826922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.827140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.827418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.827551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.827680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.827792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.827906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.827920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.828011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.828188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.828350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.828521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.828628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.828799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.828905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.828919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.829560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.829881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.829894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.830132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.830244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.830410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.830578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.830758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.830875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.830889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 
00:38:27.928 [2024-12-14 00:19:06.831111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.831124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.831334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.831348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.928 [2024-12-14 00:19:06.831505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.928 [2024-12-14 00:19:06.831519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.928 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.831678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.831699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.831809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.831823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.831979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.831995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.832102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.832208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.832452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.832611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.832799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.832968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.832982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.833085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.833099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.833343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.833357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.833560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.833574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.833742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.833756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.833915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.833929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.834205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.834218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.834450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.834465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.834585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.834598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.834748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.834763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.834982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.834996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.835254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.835270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.835479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.835494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.835726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.835740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.835825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.835838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.836017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.836031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.836240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.836253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.836457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.836471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.836646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.836660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.836861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.836874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.837673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.837956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.837969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.838366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 00:38:27.929 [2024-12-14 00:19:06.838923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.929 [2024-12-14 00:19:06.838937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.929 qpair failed and we were unable to recover it. 
00:38:27.929 [2024-12-14 00:19:06.839177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.839287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.839456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.839583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.839688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.839792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.839805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.840067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.840167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.840398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.840582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.840676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.840840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.840853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.841023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.841037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.841206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.841219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.841467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.841481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.841655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.841669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.841762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.841776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.842443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.842975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.842988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.843092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.843105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.843352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.843366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.843505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.843518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.843673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.843687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.843824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.843837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.844045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.844058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.844289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.844302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.844506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.844520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.844749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.844762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.845043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.845056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 00:38:27.930 [2024-12-14 00:19:06.845265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:27.930 [2024-12-14 00:19:06.845278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420 00:38:27.930 qpair failed and we were unable to recover it. 
00:38:27.930 [2024-12-14 00:19:06.845433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.930 [2024-12-14 00:19:06.845452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.930 qpair failed and we were unable to recover it.
00:38:27.930 [2024-12-14 00:19:06.845667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.930 [2024-12-14 00:19:06.845680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.930 qpair failed and we were unable to recover it.
00:38:27.930 [2024-12-14 00:19:06.845838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.930 [2024-12-14 00:19:06.845851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.930 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.845951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.845964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.846067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.846080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.846293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.846309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.846411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.846424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.846558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.846598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000326200 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.846848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.846886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000350000 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 A controller has encountered a failure and is being reset.
00:38:27.931 [2024-12-14 00:19:06.847180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.847210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032ff80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.847468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.847484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.847585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.847599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.847802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.847815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.847970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.847984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.848196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.848209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.848429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.848448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.848640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.848653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.848853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.848866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.849852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.849865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.850871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.850884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.851137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.851150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.851306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.851319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.851411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.851424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500033fe80 with addr=10.0.0.2, port=4420
00:38:27.931 qpair failed and we were unable to recover it.
00:38:27.931 [2024-12-14 00:19:06.851678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:27.931 [2024-12-14 00:19:06.851713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000325a80 with addr=10.0.0.2, port=4420
00:38:27.931 [2024-12-14 00:19:06.851734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000325a80 is same with the state(6) to be set
00:38:27.931 [2024-12-14 00:19:06.851762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor
00:38:27.931 [2024-12-14 00:19:06.851784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:38:27.931 [2024-12-14 00:19:06.851806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:38:27.931 [2024-12-14 00:19:06.851828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:38:27.931 Unable to reset the controller.
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.499 Malloc0
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.499 [2024-12-14 00:19:07.531417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.499 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.500 [2024-12-14 00:19:07.559724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.500 00:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 58988
00:38:29.067 Controller properly reset.
00:38:34.338 Initializing NVMe Controllers
00:38:34.338 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:34.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:34.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:38:34.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:38:34.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:38:34.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:38:34.338 Initialization complete. Launching workers.
00:38:34.338 Starting thread on core 1
00:38:34.338 Starting thread on core 2
00:38:34.338 Starting thread on core 3
00:38:34.338 Starting thread on core 0
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:38:34.338
00:38:34.338 real 0m11.471s
00:38:34.338 user 0m37.223s
00:38:34.338 sys 0m5.829s
00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.338 ************************************
00:38:34.338 END TEST nvmf_target_disconnect_tc2
00:38:34.338 ************************************
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:34.338 rmmod nvme_tcp
00:38:34.338 rmmod nvme_fabrics
00:38:34.338 rmmod nvme_keyring
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 59510 ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 59510
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 59510 ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 59510
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59510
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59510'
00:38:34.338 killing process with pid 59510
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 59510
00:38:34.338 00:19:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 59510
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:35.275 00:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:37.812 00:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:37.812
00:38:37.812 real 0m20.867s
00:38:37.812 user 1m7.456s
00:38:37.812 sys 0m10.673s
00:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:37.812 ************************************
00:38:37.812 END TEST nvmf_target_disconnect
00:38:37.812 ************************************
00:38:37.812 00:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:38:37.812
00:38:37.812 real 8m6.875s
00:38:37.812 user 19m20.002s
00:38:37.812 sys 2m6.892s
00:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:37.812 ************************************
00:38:37.812 END TEST nvmf_host
00:38:37.812 ************************************
00:38:37.812 00:19:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:38:37.812 00:19:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:38:37.812 00:19:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:37.812 00:19:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:37.812 00:19:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:37.812 00:19:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:38:37.812 ************************************
00:38:37.812 START TEST nvmf_target_core_interrupt_mode
00:38:37.812 ************************************
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:37.812 * Looking for test storage...
00:38:37.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:38:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:37.812 --rc genhtml_branch_coverage=1
00:38:37.812 --rc genhtml_function_coverage=1
00:38:37.812 --rc genhtml_legend=1
00:38:37.812 --rc geninfo_all_blocks=1
00:38:37.812 --rc geninfo_unexecuted_blocks=1
00:38:37.812
00:38:37.812 '
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:38:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:37.812 --rc genhtml_branch_coverage=1
00:38:37.812 --rc genhtml_function_coverage=1
00:38:37.812 --rc genhtml_legend=1
00:38:37.812 --rc geninfo_all_blocks=1
00:38:37.812 --rc geninfo_unexecuted_blocks=1
00:38:37.812
00:38:37.812 '
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:38:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:37.812 --rc genhtml_branch_coverage=1
00:38:37.812 --rc genhtml_function_coverage=1
00:38:37.812 --rc genhtml_legend=1
00:38:37.812 --rc geninfo_all_blocks=1
00:38:37.812 --rc geninfo_unexecuted_blocks=1
00:38:37.812
00:38:37.812 '
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:38:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:37.812 --rc genhtml_branch_coverage=1
00:38:37.812 --rc genhtml_function_coverage=1
00:38:37.812 --rc genhtml_legend=1
00:38:37.812 --rc geninfo_all_blocks=1
00:38:37.812 --rc geninfo_unexecuted_blocks=1
00:38:37.812
00:38:37.812 '
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:37.812 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:38:37.813
00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:37.813 ************************************ 00:38:37.813 START TEST nvmf_abort 00:38:37.813 ************************************ 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:38:37.813 * Looking for test storage... 
00:38:37.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:38:37.813 00:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.813 --rc genhtml_branch_coverage=1 00:38:37.813 --rc genhtml_function_coverage=1 00:38:37.813 --rc genhtml_legend=1 00:38:37.813 --rc geninfo_all_blocks=1 00:38:37.813 --rc geninfo_unexecuted_blocks=1 00:38:37.813 00:38:37.813 ' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.813 --rc genhtml_branch_coverage=1 00:38:37.813 --rc genhtml_function_coverage=1 00:38:37.813 --rc genhtml_legend=1 00:38:37.813 --rc geninfo_all_blocks=1 00:38:37.813 --rc geninfo_unexecuted_blocks=1 00:38:37.813 00:38:37.813 ' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.813 --rc genhtml_branch_coverage=1 00:38:37.813 --rc genhtml_function_coverage=1 00:38:37.813 --rc genhtml_legend=1 00:38:37.813 --rc geninfo_all_blocks=1 00:38:37.813 --rc geninfo_unexecuted_blocks=1 00:38:37.813 00:38:37.813 ' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:37.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.813 --rc genhtml_branch_coverage=1 00:38:37.813 --rc genhtml_function_coverage=1 00:38:37.813 --rc genhtml_legend=1 00:38:37.813 --rc geninfo_all_blocks=1 00:38:37.813 --rc geninfo_unexecuted_blocks=1 00:38:37.813 00:38:37.813 ' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.813 00:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.813 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.814 00:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:38:37.814 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:43.170 00:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:43.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:43.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:43.170 
00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:43.170 Found net devices under 0000:af:00.0: cvl_0_0 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:43.170 Found net devices under 0000:af:00.1: cvl_0_1 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:43.170 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:43.171 00:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:43.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:43.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:38:43.171 00:38:43.171 --- 10.0.0.2 ping statistics --- 00:38:43.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.171 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:43.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:43.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:38:43.171 00:38:43.171 --- 10.0.0.1 ping statistics --- 00:38:43.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:43.171 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=64161 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 64161 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 64161 ']' 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:43.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:43.171 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.171 [2024-12-14 00:19:22.062850] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:43.171 [2024-12-14 00:19:22.067829] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:38:43.171 [2024-12-14 00:19:22.067970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:43.171 [2024-12-14 00:19:22.191404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:43.171 [2024-12-14 00:19:22.296138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:43.171 [2024-12-14 00:19:22.296183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:43.171 [2024-12-14 00:19:22.296195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:43.171 [2024-12-14 00:19:22.296221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:43.171 [2024-12-14 00:19:22.296231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:43.171 [2024-12-14 00:19:22.298496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:43.171 [2024-12-14 00:19:22.298557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:43.171 [2024-12-14 00:19:22.298568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:43.739 [2024-12-14 00:19:22.612719] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:43.739 [2024-12-14 00:19:22.613657] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:43.739 [2024-12-14 00:19:22.614365] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:43.739 [2024-12-14 00:19:22.614581] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:43.739 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:43.739 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:43.739 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:43.739 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:43.739 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 [2024-12-14 00:19:22.899500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:38:43.999 Malloc0 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 Delay0 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 [2024-12-14 00:19:23.031435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:43.999 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:44.258 [2024-12-14 00:19:23.180472] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:46.162 Initializing NVMe Controllers 00:38:46.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:46.162 controller IO queue size 128 less than required 00:38:46.162 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:46.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:46.162 Initialization complete. Launching workers. 
00:38:46.162 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 34514 00:38:46.162 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34575, failed to submit 66 00:38:46.162 success 34514, unsuccessful 61, failed 0 00:38:46.162 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:46.162 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:46.162 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:46.163 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:46.163 rmmod nvme_tcp 00:38:46.163 rmmod nvme_fabrics 00:38:46.421 rmmod nvme_keyring 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:46.421 00:19:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 64161 ']' 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 64161 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 64161 ']' 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 64161 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64161 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64161' 00:38:46.421 killing process with pid 64161 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 64161 00:38:46.421 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 64161 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.846 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:49.751 00:38:49.751 real 0m12.025s 00:38:49.751 user 0m11.707s 00:38:49.751 sys 0m5.094s 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:49.751 ************************************ 00:38:49.751 END TEST nvmf_abort 00:38:49.751 ************************************ 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:49.751 ************************************ 00:38:49.751 START TEST nvmf_ns_hotplug_stress 00:38:49.751 ************************************ 00:38:49.751 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:50.011 * Looking for test storage... 00:38:50.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:50.011 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:50.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.012 --rc genhtml_branch_coverage=1 00:38:50.012 --rc genhtml_function_coverage=1 00:38:50.012 --rc genhtml_legend=1 00:38:50.012 --rc geninfo_all_blocks=1 00:38:50.012 --rc geninfo_unexecuted_blocks=1 00:38:50.012 00:38:50.012 ' 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:50.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.012 --rc genhtml_branch_coverage=1 00:38:50.012 --rc genhtml_function_coverage=1 00:38:50.012 --rc genhtml_legend=1 00:38:50.012 --rc geninfo_all_blocks=1 00:38:50.012 --rc geninfo_unexecuted_blocks=1 00:38:50.012 00:38:50.012 ' 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:50.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.012 --rc genhtml_branch_coverage=1 00:38:50.012 --rc genhtml_function_coverage=1 00:38:50.012 --rc genhtml_legend=1 00:38:50.012 --rc geninfo_all_blocks=1 00:38:50.012 --rc geninfo_unexecuted_blocks=1 00:38:50.012 00:38:50.012 ' 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:50.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:50.012 --rc genhtml_branch_coverage=1 00:38:50.012 --rc genhtml_function_coverage=1 00:38:50.012 --rc genhtml_legend=1 00:38:50.012 --rc geninfo_all_blocks=1 00:38:50.012 --rc geninfo_unexecuted_blocks=1 00:38:50.012 00:38:50.012 ' 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:50.012 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:50.012 00:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:50.012 00:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:50.012 00:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.286 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:38:55.287 00:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.287 
00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:55.287 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.287 00:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:55.287 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.287 00:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:55.287 Found net devices under 0000:af:00.0: cvl_0_0 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:55.287 Found net devices under 0000:af:00.1: cvl_0_1 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.287 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:55.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:38:55.287 00:38:55.287 --- 10.0.0.2 ping statistics --- 00:38:55.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.287 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:55.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:38:55.287 00:38:55.287 --- 10.0.0.1 ping statistics --- 00:38:55.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.287 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:38:55.287 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.288 00:19:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=68307 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 68307 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 68307 ']' 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:55.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.288 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:55.288 [2024-12-14 00:19:34.305879] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:55.288 [2024-12-14 00:19:34.307959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:38:55.288 [2024-12-14 00:19:34.308036] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.288 [2024-12-14 00:19:34.424201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:55.547 [2024-12-14 00:19:34.528201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.547 [2024-12-14 00:19:34.528246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.547 [2024-12-14 00:19:34.528258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.547 [2024-12-14 00:19:34.528267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.547 [2024-12-14 00:19:34.528277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:55.547 [2024-12-14 00:19:34.530440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:55.547 [2024-12-14 00:19:34.530512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.547 [2024-12-14 00:19:34.530518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:55.805 [2024-12-14 00:19:34.853831] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:55.805 [2024-12-14 00:19:34.854886] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:55.805 [2024-12-14 00:19:34.855608] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.805 [2024-12-14 00:19:34.855823] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:38:56.064 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:56.322 [2024-12-14 00:19:35.307291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:56.322 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:56.580 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:56.580 [2024-12-14 00:19:35.696145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.580 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:56.839 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:38:57.097 Malloc0 00:38:57.098 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:57.356 Delay0 00:38:57.356 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.356 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:38:57.615 NULL1 00:38:57.615 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:57.873 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68780 00:38:57.873 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:38:57.873 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:38:57.873 00:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.132 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.391 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:38:58.391 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:38:58.391 true 00:38:58.649 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:38:58.649 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:58.649 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:58.908 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:58.908 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:59.167 true 00:38:59.167 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:38:59.167 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.425 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:59.684 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:59.684 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:59.684 true 00:38:59.684 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:38:59.684 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:59.943 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.201 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:00.201 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:00.460 true 00:39:00.460 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:00.460 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.718 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:00.977 00:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:00.977 00:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:00.977 true 00:39:00.977 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:00.977 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:01.235 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:01.494 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:01.494 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:01.753 true 00:39:01.753 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:01.753 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.011 00:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.269 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:39:02.269 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:02.269 true 00:39:02.269 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:02.269 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:02.527 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:02.787 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:02.787 00:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:03.045 true 00:39:03.045 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:03.045 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.304 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:03.562 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:39:03.562 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:03.562 true 00:39:03.562 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:03.562 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:03.821 00:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.080 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:04.080 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:04.338 true 00:39:04.338 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:04.338 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:04.597 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:04.857 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:04.857 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:04.857 true 00:39:04.857 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:04.857 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.115 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:05.374 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:05.374 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:05.632 true 00:39:05.632 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:05.632 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:05.891 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.149 00:19:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:06.149 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:06.149 true 00:39:06.149 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:06.149 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.407 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.665 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:06.665 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:06.924 true 00:39:06.924 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:06.924 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.183 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:39:07.183 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:07.183 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:07.441 true 00:39:07.441 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:07.441 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.700 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.959 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:07.959 00:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:08.217 true 00:39:08.217 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:08.217 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.476 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.476 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:08.476 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:08.734 true 00:39:08.734 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:08.734 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.992 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.251 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:09.251 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:09.510 true 00:39:09.510 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:09.510 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.769 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.769 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:09.769 00:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:10.028 true 00:39:10.028 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:10.028 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.286 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.545 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:10.545 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:10.804 true 00:39:10.804 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:10.804 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.063 00:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.063 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:11.063 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:11.322 true 00:39:11.322 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:11.322 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.581 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.839 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:11.839 00:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:12.098 true 00:39:12.098 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:12.098 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.356 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.356 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:12.356 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:12.615 true 00:39:12.615 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:12.615 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.874 00:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.132 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:13.132 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:13.391 true 00:39:13.391 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:13.391 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.391 00:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.649 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:13.649 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:13.908 true 00:39:13.908 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:13.908 00:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.167 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.426 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:14.426 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:14.426 true 00:39:14.426 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:14.426 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:39:14.684 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.946 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:14.946 00:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:15.207 true 00:39:15.207 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:15.207 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.466 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.724 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:15.724 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:15.724 true 00:39:15.724 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:15.725 00:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:39:15.983 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.241 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:16.241 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:16.499 true 00:39:16.499 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:16.499 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.758 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.018 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:17.018 00:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:17.018 true 00:39:17.018 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:17.018 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.277 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.536 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:17.536 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:17.795 true 00:39:17.795 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:17.795 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.054 00:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.313 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:18.314 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:18.314 true 00:39:18.314 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:18.314 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.573 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.831 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:18.831 00:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:19.090 true 00:39:19.090 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:19.090 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.349 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.349 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:19.349 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:19.607 true 00:39:19.607 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:19.607 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.866 00:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.125 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:20.125 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:20.383 true 00:39:20.383 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:20.384 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.643 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.643 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:20.643 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:20.902 true 00:39:20.902 00:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:20.902 00:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.161 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.422 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:21.422 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:21.682 true 00:39:21.683 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:21.683 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.941 00:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.199 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:22.199 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:22.199 true 00:39:22.199 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 
00:39:22.199 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.458 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.716 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:22.716 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:22.974 true 00:39:22.974 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:22.975 00:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.233 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.492 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:23.492 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:23.492 true 00:39:23.492 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 68780 00:39:23.492 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.750 00:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.008 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:24.008 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:24.267 true 00:39:24.267 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:24.267 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.526 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.784 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:24.784 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:24.784 true 00:39:24.784 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:24.784 00:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.043 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.302 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:25.302 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:25.560 true 00:39:25.560 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:25.560 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.819 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.077 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:26.078 00:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:26.078 true 00:39:26.078 00:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:26.078 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.335 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.593 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:26.593 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:26.851 true 00:39:26.851 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780 00:39:26.851 00:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.110 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.370 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:27.370 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:27.370 true 
00:39:27.370 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780
00:39:27.370 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:27.672 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:27.997 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:39:27.997 00:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:39:27.997 true
00:39:28.347 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780
00:39:28.347 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:28.347 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:28.347 Initializing NVMe Controllers
00:39:28.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:28.347 Controller IO queue size 128, less than required.
00:39:28.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:28.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:28.347 Initialization complete. Launching workers.
00:39:28.347 ========================================================
00:39:28.347 Latency(us)
00:39:28.347 Device Information : IOPS MiB/s Average min max
00:39:28.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 24017.96 11.73 5332.81 1402.63 45489.51
00:39:28.347 ========================================================
00:39:28.347 Total : 24017.96 11.73 5332.81 1402.63 45489.51
00:39:28.606 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:39:28.606 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:39:28.606 true
00:39:28.606 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68780
00:39:28.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68780) - No such process
00:39:28.606 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68780
00:39:28.606 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:28.865 00:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:29.124 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- #
nthreads=8 00:39:29.124 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:39:29.124 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:39:29.124 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.124 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:39:29.383 null0 00:39:29.383 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.383 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.383 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:39:29.642 null1 00:39:29.642 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.642 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.642 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:39:29.642 null2 00:39:29.642 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.642 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.642 
00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:39:29.901 null3 00:39:29.901 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:29.901 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:29.901 00:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:39:30.160 null4 00:39:30.160 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:30.160 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:30.160 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:39:30.160 null5 00:39:30.419 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:30.419 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:30.419 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:39:30.419 null6 00:39:30.419 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:30.419 00:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:30.419 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:39:30.678 null7 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:30.678 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
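The interleaved `sh@58`–`sh@64` entries above are eight `add_remove` workers being forked concurrently, which is why their trace lines overlap. A runnable sketch of that parallel phase, with `rpc()` again as a hypothetical echoing stand-in for `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the parallel hotplug phase from ns_hotplug_stress.sh: create
# null0..null7 bdevs, fork one add/remove worker per namespace, then wait.
# rpc() echoes instead of calling scripts/rpc.py against a live target.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096   # sh@60: 100 blocks-worth, 4096 B block size
done

add_remove() {                               # sh@14-19: hammer one NSID for 10 rounds
    local nsid=$1 bdev=$2
    for ((j = 0; j < 10; j++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &       # sh@63: one background worker per namespace
    pids+=($!)                               # sh@64: collect worker PIDs
done
wait "${pids[@]}"                            # sh@66: e.g. "wait 73995 73997 ..." in the log
echo "workers finished: ${#pids[@]}"
```

Because each worker is a background job issuing its own RPCs, the log's out-of-order `(( ++i ))` / `add_remove N nullM` lines are expected scheduler interleaving rather than an error.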
00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 73995 73997 74001 74003 74006 74009 74012 74013 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.679 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:30.938 00:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:30.938 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:30.938 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:30.938 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.197 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.197 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.198 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.198 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.457 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.457 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.457 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.716 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:31.717 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:31.717 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:31.976 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.976 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.976 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:31.976 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:31.977 00:20:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:31.977 00:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.235 00:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.235 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.494 00:20:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:32.494 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.752 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:32.753 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.012 00:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.271 00:20:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.271 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:33.530 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:33.790 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:34.049 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.308 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.309 00:20:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.309 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:34.566 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:34.566 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.566 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:34.566 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:34.566 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:34.567 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:34.567 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:34.567 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:34.825 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.825 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.825 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.825 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.825 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:34.826 00:20:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.826 rmmod nvme_tcp 00:39:34.826 rmmod nvme_fabrics 00:39:34.826 rmmod nvme_keyring 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 68307 ']' 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 68307 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 68307 ']' 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@958 -- # kill -0 68307 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.826 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68307 00:39:35.084 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:35.085 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:35.085 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68307' 00:39:35.085 killing process with pid 68307 00:39:35.085 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 68307 00:39:35.085 00:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 68307 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.462 00:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:38.366 00:39:38.366 real 0m48.433s 00:39:38.366 user 3m5.153s 00:39:38.366 sys 0m20.554s 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:38.366 ************************************ 00:39:38.366 END TEST nvmf_ns_hotplug_stress 00:39:38.366 ************************************ 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:38.366 ************************************ 00:39:38.366 START TEST nvmf_delete_subsystem 00:39:38.366 ************************************ 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:38.366 * Looking for test storage... 00:39:38.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
scripts/common.sh@337 -- # IFS=.-: 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:38.366 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:38.367 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.626 --rc genhtml_branch_coverage=1 00:39:38.626 --rc genhtml_function_coverage=1 00:39:38.626 --rc genhtml_legend=1 00:39:38.626 --rc geninfo_all_blocks=1 00:39:38.626 --rc geninfo_unexecuted_blocks=1 00:39:38.626 00:39:38.626 ' 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.626 --rc genhtml_branch_coverage=1 00:39:38.626 --rc genhtml_function_coverage=1 00:39:38.626 --rc genhtml_legend=1 00:39:38.626 --rc geninfo_all_blocks=1 00:39:38.626 --rc geninfo_unexecuted_blocks=1 00:39:38.626 00:39:38.626 ' 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.626 --rc genhtml_branch_coverage=1 00:39:38.626 --rc genhtml_function_coverage=1 00:39:38.626 --rc genhtml_legend=1 00:39:38.626 --rc geninfo_all_blocks=1 00:39:38.626 --rc geninfo_unexecuted_blocks=1 00:39:38.626 00:39:38.626 ' 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:38.626 --rc genhtml_branch_coverage=1 00:39:38.626 --rc genhtml_function_coverage=1 00:39:38.626 --rc genhtml_legend=1 00:39:38.626 --rc geninfo_all_blocks=1 00:39:38.626 --rc geninfo_unexecuted_blocks=1 00:39:38.626 00:39:38.626 ' 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:38.626 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:38.627 00:20:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:38.627 00:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:43.900 00:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:43.900 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:43.901 00:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:43.901 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:43.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.901 00:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:43.901 Found net devices under 0000:af:00.0: cvl_0_0 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:43.901 Found net devices under 0000:af:00.1: cvl_0_1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:43.901 00:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:43.901 00:20:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:43.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:43.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:39:43.901 00:39:43.901 --- 10.0.0.2 ping statistics --- 00:39:43.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.901 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:43.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:43.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:39:43.901 00:39:43.901 --- 10.0.0.1 ping statistics --- 00:39:43.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.901 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:43.901 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:43.901 
00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=78322 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 78322 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 78322 ']' 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:43.902 00:20:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:43.902 [2024-12-14 00:20:22.720156] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:43.902 [2024-12-14 00:20:22.722249] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:39:43.902 [2024-12-14 00:20:22.722316] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:43.902 [2024-12-14 00:20:22.839602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:43.902 [2024-12-14 00:20:22.946012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:43.902 [2024-12-14 00:20:22.946053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:43.902 [2024-12-14 00:20:22.946065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:43.902 [2024-12-14 00:20:22.946075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:43.902 [2024-12-14 00:20:22.946089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:43.902 [2024-12-14 00:20:22.948212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.902 [2024-12-14 00:20:22.948223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.161 [2024-12-14 00:20:23.263016] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:44.161 [2024-12-14 00:20:23.263666] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:44.161 [2024-12-14 00:20:23.263883] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.420 [2024-12-14 00:20:23.537245] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.420 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.420 [2024-12-14 00:20:23.557652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.679 NULL1 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 
1000000 -t 1000000 -w 1000000 -n 1000000 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.679 Delay0 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=78374 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:44.679 00:20:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:44.679 [2024-12-14 00:20:23.691519] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:39:46.581 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:46.581 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:46.581 00:20:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated per-I/O completion lines elided: "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)", and "starting I/O failed: -6", emitted at 00:39:46.841-00:39:47.778 while the subsystem was deleted under active I/O; distinct diagnostic lines retained below]
[2024-12-14 00:20:25.858974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ed00 is same with the state(6) to be set
[2024-12-14 00:20:25.859968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(6) to be set
[2024-12-14 00:20:25.860546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(6) to be set
[2024-12-14 00:20:25.861607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001fe80 is same with the state(6) to be set
[2024-12-14 00:20:26.829701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e080 is same with the state(6) to be set
[2024-12-14 00:20:26.859495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(6) to be set
[2024-12-14 00:20:26.862366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001e800 is same with the state(6) to be set
[2024-12-14 00:20:26.863390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ea80 is same with the state(6) to be set
00:39:47.778 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:47.778 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:39:47.778 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 78374
00:39:47.778 00:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
[2024-12-14 00:20:26.868048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500001ef80 is same with the state(6) to be set
00:39:47.778 Initializing NVMe Controllers
00:39:47.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:47.778 Controller IO queue size 128, less than required.
00:39:47.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:47.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:47.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:47.778 Initialization complete. Launching workers. 00:39:47.778 ======================================================== 00:39:47.778 Latency(us) 00:39:47.778 Device Information : IOPS MiB/s Average min max 00:39:47.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.44 0.09 952415.60 686.04 1014422.15 00:39:47.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.71 0.08 884425.27 596.48 1014992.89 00:39:47.779 ======================================================== 00:39:47.779 Total : 339.15 0.17 921601.27 596.48 1014992.89 00:39:47.779 00:39:47.779 [2024-12-14 00:20:26.873520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500001e080 (9): Bad file descriptor 00:39:47.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 78374 00:39:48.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (78374) - No such process 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 78374 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 78374 00:39:48.346 00:20:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:48.346 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 78374 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:48.347 [2024-12-14 00:20:27.397615] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=79025 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:48.347 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:48.605 [2024-12-14 00:20:27.504348] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:48.864 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:48.864 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:48.864 00:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:49.430 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:49.430 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:49.430 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:49.997 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:49.997 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:49.997 00:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:50.564 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:50.564 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:50.564 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 
0.5 00:39:50.822 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:50.822 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:50.822 00:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.389 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.390 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:51.390 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:51.648 Initializing NVMe Controllers 00:39:51.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:51.648 Controller IO queue size 128, less than required. 00:39:51.648 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:51.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:51.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:51.648 Initialization complete. Launching workers. 
00:39:51.648 ======================================================== 00:39:51.648 Latency(us) 00:39:51.648 Device Information : IOPS MiB/s Average min max 00:39:51.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003739.93 1000222.89 1012391.04 00:39:51.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004970.28 1000208.91 1012333.97 00:39:51.648 ======================================================== 00:39:51.648 Total : 256.00 0.12 1004355.10 1000208.91 1012391.04 00:39:51.648 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 79025 00:39:51.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (79025) - No such process 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 79025 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.906 00:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.906 rmmod nvme_tcp 00:39:51.906 rmmod nvme_fabrics 00:39:51.906 rmmod nvme_keyring 00:39:51.906 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:51.906 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:39:51.906 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:39:51.906 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 78322 ']' 00:39:51.906 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 78322 00:39:51.907 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 78322 ']' 00:39:51.907 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 78322 00:39:51.907 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:39:51.907 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.907 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78322 00:39:52.165 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:52.165 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:52.165 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 78322' 00:39:52.165 killing process with pid 78322 00:39:52.165 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 78322 00:39:52.165 00:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 78322 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.102 00:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:55.637 00:39:55.637 real 0m16.898s 00:39:55.637 user 0m27.209s 00:39:55.637 sys 0m5.808s 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:55.637 ************************************ 00:39:55.637 END TEST nvmf_delete_subsystem 00:39:55.637 ************************************ 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:55.637 ************************************ 00:39:55.637 START TEST nvmf_host_management 00:39:55.637 ************************************ 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:39:55.637 * Looking for test storage... 
00:39:55.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:39:55.637 00:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:39:55.637 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:55.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.638 --rc genhtml_branch_coverage=1 00:39:55.638 --rc genhtml_function_coverage=1 00:39:55.638 --rc genhtml_legend=1 00:39:55.638 --rc geninfo_all_blocks=1 00:39:55.638 --rc geninfo_unexecuted_blocks=1 00:39:55.638 00:39:55.638 ' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:55.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.638 --rc genhtml_branch_coverage=1 00:39:55.638 --rc genhtml_function_coverage=1 00:39:55.638 --rc genhtml_legend=1 00:39:55.638 --rc geninfo_all_blocks=1 00:39:55.638 --rc geninfo_unexecuted_blocks=1 00:39:55.638 00:39:55.638 ' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:55.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.638 --rc genhtml_branch_coverage=1 00:39:55.638 --rc genhtml_function_coverage=1 00:39:55.638 --rc genhtml_legend=1 00:39:55.638 --rc geninfo_all_blocks=1 00:39:55.638 --rc geninfo_unexecuted_blocks=1 00:39:55.638 00:39:55.638 ' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:55.638 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.638 --rc genhtml_branch_coverage=1 00:39:55.638 --rc genhtml_function_coverage=1 00:39:55.638 --rc genhtml_legend=1 00:39:55.638 --rc geninfo_all_blocks=1 00:39:55.638 --rc geninfo_unexecuted_blocks=1 00:39:55.638 00:39:55.638 ' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:55.638 00:20:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.638 
00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:39:55.638 00:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:00.907 
00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:00.907 00:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:00.907 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:00.908 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:00.908 00:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:00.908 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:00.908 00:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:00.908 Found net devices under 0000:af:00.0: cvl_0_0 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:00.908 Found net devices under 0000:af:00.1: cvl_0_1 00:40:00.908 00:20:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:00.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:00.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:40:00.908 00:40:00.908 --- 10.0.0.2 ping statistics --- 00:40:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.908 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:00.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:00.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:40:00.908 00:40:00.908 --- 10.0.0.1 ping statistics --- 00:40:00.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:00.908 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
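The namespace plumbing traced above (nvmf/common.sh@265–291) can be sketched as a standalone script. The interface names, addresses, and port are taken directly from the log; the `run`/`DRY_RUN` wrapper is an illustrative addition so the sequence can be printed and inspected without root privileges or the real `cvl_0_*` NICs.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print commands instead of executing them unless DRY_RUN=0 (the real
# sequence needs root and the physical interfaces).
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

setup_nvmf_target_ns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$target_if"              # start from a clean slate
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"                         # target side lives in its own namespace
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # allow the NVMe/TCP listener port through the initiator-side firewall
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # sanity-check connectivity in both directions before starting nvmf_tgt
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}

setup_nvmf_target_ns
```

Keeping the target interface in its own namespace is what later forces every RPC and app launch through `ip netns exec cvl_0_0_ns_spdk`, as seen in the nvmf_tgt invocation below.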
00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=83154 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 83154 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 83154 ']' 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:00.908 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 
-- # local max_retries=100 00:40:00.909 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:00.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:00.909 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:00.909 00:20:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:00.909 [2024-12-14 00:20:39.961878] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:00.909 [2024-12-14 00:20:39.963997] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:00.909 [2024-12-14 00:20:39.964070] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:01.167 [2024-12-14 00:20:40.087639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:01.167 [2024-12-14 00:20:40.200339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:01.167 [2024-12-14 00:20:40.200386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:01.167 [2024-12-14 00:20:40.200398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:01.167 [2024-12-14 00:20:40.200424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:01.167 [2024-12-14 00:20:40.200434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:01.167 [2024-12-14 00:20:40.203158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:01.167 [2024-12-14 00:20:40.203229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:01.167 [2024-12-14 00:20:40.203532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.167 [2024-12-14 00:20:40.203556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:40:01.426 [2024-12-14 00:20:40.538236] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:01.426 [2024-12-14 00:20:40.539968] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:01.426 [2024-12-14 00:20:40.541992] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:01.426 [2024-12-14 00:20:40.542988] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:01.426 [2024-12-14 00:20:40.543308] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
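The `waitforlisten` handshake above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", with `max_retries=100`) boils down to a bounded poll that also bails out if the target dies. A minimal sketch, assuming a pluggable `PROBE` condition (the real helper in autotest_common.sh probes the RPC socket of the live target directly):

```shell
set -euo pipefail

# waitforlisten PID RPC_ADDR [PROBE]: bounded poll until PROBE succeeds,
# failing early if PID has already exited. PROBE is an illustrative hook
# so the loop can be exercised without a running SPDK target.
waitforlisten() {
    local pid="$1" rpc_addr="${2:-/var/tmp/spdk.sock}" probe="${3:-}"
    local max_retries=100 i=0
    [ -n "$probe" ] || probe="test -S $rpc_addr"   # default: socket file exists
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i < max_retries )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "process $pid exited before listening" >&2
            return 1
        fi
        if eval "$probe"; then
            return 0          # the socket (or stand-in condition) is ready
        fi
        sleep 0.1
        (( ++i ))
    done
    return 1                  # gave up after max_retries probes
}
```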
00:40:01.684 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:01.684 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:01.684 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:01.684 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:01.685 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.685 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:01.685 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:01.685 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.685 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.685 [2024-12-14 00:20:40.808309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.944 00:20:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.944 Malloc0 00:40:01.944 [2024-12-14 00:20:40.936580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=83338 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 83338 /var/tmp/bdevperf.sock 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 83338 ']' 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:01.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:01.944 { 00:40:01.944 "params": { 00:40:01.944 "name": "Nvme$subsystem", 00:40:01.944 "trtype": "$TEST_TRANSPORT", 00:40:01.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:01.944 "adrfam": "ipv4", 00:40:01.944 "trsvcid": "$NVMF_PORT", 00:40:01.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:01.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:01.944 "hdgst": ${hdgst:-false}, 00:40:01.944 "ddgst": ${ddgst:-false} 00:40:01.944 }, 00:40:01.944 "method": "bdev_nvme_attach_controller" 00:40:01.944 } 00:40:01.944 EOF 00:40:01.944 )") 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:01.944 00:20:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:01.944 "params": { 00:40:01.944 "name": "Nvme0", 00:40:01.944 "trtype": "tcp", 00:40:01.944 "traddr": "10.0.0.2", 00:40:01.944 "adrfam": "ipv4", 00:40:01.944 "trsvcid": "4420", 00:40:01.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:01.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:01.944 "hdgst": false, 00:40:01.944 "ddgst": false 00:40:01.944 }, 00:40:01.944 "method": "bdev_nvme_attach_controller" 00:40:01.944 }' 00:40:01.944 [2024-12-14 00:20:41.056370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:01.944 [2024-12-14 00:20:41.056466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83338 ] 00:40:02.203 [2024-12-14 00:20:41.172917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.203 [2024-12-14 00:20:41.282863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:02.771 Running I/O for 10 seconds... 
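The JSON that bdevperf reads via `--json /dev/fd/63` is assembled by `gen_nvmf_target_json` from a per-subsystem heredoc, exactly as the expanded `printf '%s\n' '{ ... }'` above shows. A simplified sketch: the `jq .` normalization pass is omitted, the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` lookups are replaced with the literal values visible in the log, and the outer `"subsystems"`/`"bdev"` wrapper is an assumption about the final shape rather than something printed in this trace.

```shell
set -euo pipefail

# Build the bdevperf --json config, one bdev_nvme_attach_controller
# entry per requested subsystem id (defaulting to 0, as in the log).
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the entries with commas and wrap them for bdevperf
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}"
}
```

Feeding the config through a process substitution (`--json /dev/fd/63`) keeps it off disk and lets the attach parameters track whatever addresses the earlier namespace setup produced.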
00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:03.029 00:20:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.029 00:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.029 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:03.029 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:03.029 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=614 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 614 -ge 100 ']' 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.290 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.290 [2024-12-14 00:20:42.308263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.290 [2024-12-14 00:20:42.308322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.290 [2024-12-14 00:20:42.308334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308447] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.308529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:40:03.291 [2024-12-14 00:20:42.312941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:03.291 [2024-12-14 00:20:42.312989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.291 [2024-12-14 00:20:42.313104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 
00:20:42.313374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:03.291 [2024-12-14 00:20:42.313516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.291 [2024-12-14 00:20:42.313603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.291 [2024-12-14 00:20:42.313615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.292 
[2024-12-14 00:20:42.313834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.313987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.313996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:03.292 [2024-12-14 00:20:42.314111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 
00:20:42.314174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:03.292 [2024-12-14 00:20:42.314347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:03.292 [2024-12-14 00:20:42.314386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:40:03.292 [2024-12-14 00:20:42.315599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:03.292 task offset: 90112 on job bdev=Nvme0n1 fails 00:40:03.292 00:40:03.292 Latency(us) 00:40:03.292 [2024-12-13T23:20:42.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.292 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:03.292 Job: Nvme0n1 ended in about 0.41 seconds with error 00:40:03.292 Verification LBA range: start 0x0 length 0x400 00:40:03.292 Nvme0n1 : 0.41 1723.26 107.70 156.66 0.00 
33103.85 2465.40 31332.45 00:40:03.292 [2024-12-13T23:20:42.433Z] =================================================================================================================== 00:40:03.293 [2024-12-13T23:20:42.434Z] Total : 1723.26 107.70 156.66 0.00 33103.85 2465.40 31332.45 00:40:03.293 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.293 00:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:03.293 [2024-12-14 00:20:42.331040] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:03.293 [2024-12-14 00:20:42.331077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000325a80 (9): Bad file descriptor 00:40:03.293 [2024-12-14 00:20:42.382976] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 83338 00:40:04.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (83338) - No such process 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # 
gen_nvmf_target_json 0 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:04.230 { 00:40:04.230 "params": { 00:40:04.230 "name": "Nvme$subsystem", 00:40:04.230 "trtype": "$TEST_TRANSPORT", 00:40:04.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:04.230 "adrfam": "ipv4", 00:40:04.230 "trsvcid": "$NVMF_PORT", 00:40:04.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:04.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:04.230 "hdgst": ${hdgst:-false}, 00:40:04.230 "ddgst": ${ddgst:-false} 00:40:04.230 }, 00:40:04.230 "method": "bdev_nvme_attach_controller" 00:40:04.230 } 00:40:04.230 EOF 00:40:04.230 )") 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:04.230 00:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:04.230 "params": { 00:40:04.230 "name": "Nvme0", 00:40:04.230 "trtype": "tcp", 00:40:04.230 "traddr": "10.0.0.2", 00:40:04.230 "adrfam": "ipv4", 00:40:04.230 "trsvcid": "4420", 00:40:04.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:04.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:04.230 "hdgst": false, 00:40:04.230 "ddgst": false 00:40:04.230 }, 00:40:04.230 "method": "bdev_nvme_attach_controller" 00:40:04.230 }' 00:40:04.489 [2024-12-14 00:20:43.406257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:04.489 [2024-12-14 00:20:43.406344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83672 ] 00:40:04.489 [2024-12-14 00:20:43.518851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.748 [2024-12-14 00:20:43.635295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.007 Running I/O for 1 seconds... 
00:40:06.384 1901.00 IOPS, 118.81 MiB/s 00:40:06.384 Latency(us) 00:40:06.384 [2024-12-13T23:20:45.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.384 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:06.384 Verification LBA range: start 0x0 length 0x400 00:40:06.384 Nvme0n1 : 1.02 1944.20 121.51 0.00 0.00 32252.14 3573.27 30208.98 00:40:06.384 [2024-12-13T23:20:45.525Z] =================================================================================================================== 00:40:06.384 [2024-12-13T23:20:45.525Z] Total : 1944.20 121.51 0.00 0.00 32252.14 3573.27 30208.98 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:40:06.951 
00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:06.951 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:06.951 rmmod nvme_tcp 00:40:07.210 rmmod nvme_fabrics 00:40:07.210 rmmod nvme_keyring 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 83154 ']' 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 83154 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 83154 ']' 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 83154 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83154 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:07.210 00:20:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83154' 00:40:07.210 killing process with pid 83154 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 83154 00:40:07.210 00:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 83154 00:40:08.589 [2024-12-14 00:20:47.413434] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:08.589 00:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:10.493 00:40:10.493 real 0m15.262s 00:40:10.493 user 0m28.203s 00:40:10.493 sys 0m6.579s 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:10.493 ************************************ 00:40:10.493 END TEST nvmf_host_management 00:40:10.493 ************************************ 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:10.493 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:10.753 ************************************ 00:40:10.753 START TEST nvmf_lvol 00:40:10.753 ************************************ 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:10.753 * Looking for test storage... 
00:40:10.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.753 --rc genhtml_branch_coverage=1 00:40:10.753 --rc genhtml_function_coverage=1 00:40:10.753 --rc genhtml_legend=1 00:40:10.753 --rc geninfo_all_blocks=1 00:40:10.753 --rc geninfo_unexecuted_blocks=1 00:40:10.753 00:40:10.753 ' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.753 --rc genhtml_branch_coverage=1 00:40:10.753 --rc genhtml_function_coverage=1 00:40:10.753 --rc genhtml_legend=1 00:40:10.753 --rc geninfo_all_blocks=1 00:40:10.753 --rc geninfo_unexecuted_blocks=1 00:40:10.753 00:40:10.753 ' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.753 --rc genhtml_branch_coverage=1 00:40:10.753 --rc genhtml_function_coverage=1 00:40:10.753 --rc genhtml_legend=1 00:40:10.753 --rc geninfo_all_blocks=1 00:40:10.753 --rc geninfo_unexecuted_blocks=1 00:40:10.753 00:40:10.753 ' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:10.753 --rc genhtml_branch_coverage=1 00:40:10.753 --rc genhtml_function_coverage=1 00:40:10.753 --rc genhtml_legend=1 00:40:10.753 --rc geninfo_all_blocks=1 00:40:10.753 --rc geninfo_unexecuted_blocks=1 00:40:10.753 00:40:10.753 ' 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:10.753 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:10.754 
00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:10.754 00:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:16.124 00:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:16.124 00:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:16.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:16.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:16.124 00:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:16.124 Found net devices under 0000:af:00.0: cvl_0_0 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.124 00:20:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:16.124 Found net devices under 0000:af:00.1: cvl_0_1 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:16.124 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:16.125 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:16.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:16.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:40:16.383 00:40:16.383 --- 10.0.0.2 ping statistics --- 00:40:16.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.383 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:16.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:16.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:40:16.383 00:40:16.383 --- 10.0.0.1 ping statistics --- 00:40:16.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.383 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=87737 
00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 87737 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 87737 ']' 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:16.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:16.383 00:20:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:16.383 [2024-12-14 00:20:55.429471] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:16.383 [2024-12-14 00:20:55.431563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:40:16.383 [2024-12-14 00:20:55.431632] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:16.641 [2024-12-14 00:20:55.549962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:16.641 [2024-12-14 00:20:55.656224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.641 [2024-12-14 00:20:55.656269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:16.641 [2024-12-14 00:20:55.656281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.641 [2024-12-14 00:20:55.656305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:16.641 [2024-12-14 00:20:55.656315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.641 [2024-12-14 00:20:55.658499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.641 [2024-12-14 00:20:55.658565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.641 [2024-12-14 00:20:55.658572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:16.900 [2024-12-14 00:20:55.952485] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:16.900 [2024-12-14 00:20:55.953326] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:16.900 [2024-12-14 00:20:55.954175] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:40:16.900 [2024-12-14 00:20:55.954379] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:17.159 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:17.417 [2024-12-14 00:20:56.435669] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:17.417 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:17.675 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:17.675 00:20:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:17.933 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:17.934 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:18.192 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:18.450 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=55701894-0164-418d-99e7-872928e8b1ce 00:40:18.450 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55701894-0164-418d-99e7-872928e8b1ce lvol 20 00:40:18.709 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c498ce01-d0fd-4ddb-b6f6-54d2aae38291 00:40:18.709 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:18.709 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c498ce01-d0fd-4ddb-b6f6-54d2aae38291 00:40:18.967 00:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:19.225 [2024-12-14 00:20:58.155454] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.225 00:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:19.225 
00:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=88312 00:40:19.225 00:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:19.225 00:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:20.599 00:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c498ce01-d0fd-4ddb-b6f6-54d2aae38291 MY_SNAPSHOT 00:40:20.599 00:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bbc65e3d-76da-48e0-b077-0a797b458681 00:40:20.599 00:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c498ce01-d0fd-4ddb-b6f6-54d2aae38291 30 00:40:20.857 00:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bbc65e3d-76da-48e0-b077-0a797b458681 MY_CLONE 00:40:21.115 00:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=57271f87-5539-4a3e-a476-0c302caac563 00:40:21.115 00:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 57271f87-5539-4a3e-a476-0c302caac563 00:40:21.681 00:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 88312 00:40:29.798 Initializing NVMe Controllers 00:40:29.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:29.798 Controller 
IO queue size 128, less than required. 00:40:29.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:29.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:29.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:29.798 Initialization complete. Launching workers. 00:40:29.798 ======================================================== 00:40:29.798 Latency(us) 00:40:29.798 Device Information : IOPS MiB/s Average min max 00:40:29.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11163.80 43.61 11467.33 386.80 217275.14 00:40:29.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10991.00 42.93 11649.68 3430.82 132464.58 00:40:29.798 ======================================================== 00:40:29.798 Total : 22154.80 86.54 11557.79 386.80 217275.14 00:40:29.798 00:40:29.798 00:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:30.057 00:21:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c498ce01-d0fd-4ddb-b6f6-54d2aae38291 00:40:30.057 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55701894-0164-418d-99e7-872928e8b1ce 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:30.316 rmmod nvme_tcp 00:40:30.316 rmmod nvme_fabrics 00:40:30.316 rmmod nvme_keyring 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 87737 ']' 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 87737 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 87737 ']' 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 87737 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 87737 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87737' 00:40:30.316 killing process with pid 87737 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 87737 00:40:30.316 00:21:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 87737 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.221 00:21:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.221 00:21:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:34.126 00:40:34.126 real 0m23.414s 00:40:34.126 user 0m57.461s 00:40:34.126 sys 0m9.275s 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:34.126 ************************************ 00:40:34.126 END TEST nvmf_lvol 00:40:34.126 ************************************ 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:34.126 ************************************ 00:40:34.126 START TEST nvmf_lvs_grow 00:40:34.126 ************************************ 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:34.126 * Looking for test storage... 
00:40:34.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:34.126 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:34.385 00:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.385 00:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.385 --rc genhtml_branch_coverage=1 00:40:34.385 --rc genhtml_function_coverage=1 00:40:34.385 --rc genhtml_legend=1 00:40:34.385 --rc geninfo_all_blocks=1 00:40:34.385 --rc geninfo_unexecuted_blocks=1 00:40:34.385 00:40:34.385 ' 00:40:34.385 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.385 --rc genhtml_branch_coverage=1 00:40:34.385 --rc genhtml_function_coverage=1 00:40:34.386 --rc genhtml_legend=1 00:40:34.386 --rc geninfo_all_blocks=1 00:40:34.386 --rc geninfo_unexecuted_blocks=1 00:40:34.386 00:40:34.386 ' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.386 --rc genhtml_branch_coverage=1 00:40:34.386 --rc genhtml_function_coverage=1 00:40:34.386 --rc genhtml_legend=1 00:40:34.386 --rc geninfo_all_blocks=1 00:40:34.386 --rc geninfo_unexecuted_blocks=1 00:40:34.386 00:40:34.386 ' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.386 --rc genhtml_branch_coverage=1 00:40:34.386 --rc genhtml_function_coverage=1 00:40:34.386 --rc genhtml_legend=1 00:40:34.386 --rc geninfo_all_blocks=1 00:40:34.386 --rc 
geninfo_unexecuted_blocks=1 00:40:34.386 00:40:34.386 ' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:34.386 00:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.386 00:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.386 00:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:34.386 00:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:39.658 
00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:39.658 00:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:39.658 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:39.659 00:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:39.659 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:39.659 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:39.659 Found net devices under 0000:af:00.0: cvl_0_0 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.659 00:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:39.659 Found net devices under 0000:af:00.1: cvl_0_1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:39.659 
00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:39.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:39.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:40:39.659 00:40:39.659 --- 10.0.0.2 ping statistics --- 00:40:39.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.659 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:39.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:39.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:40:39.659 00:40:39.659 --- 10.0.0.1 ping statistics --- 00:40:39.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.659 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:39.659 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:39.659 00:21:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=94078 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 94078 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 94078 ']' 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:39.660 00:21:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:39.918 [2024-12-14 00:21:18.832456] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:39.918 [2024-12-14 00:21:18.834588] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:40:39.918 [2024-12-14 00:21:18.834664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:39.918 [2024-12-14 00:21:18.952744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:39.919 [2024-12-14 00:21:19.052057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:39.919 [2024-12-14 00:21:19.052100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:39.919 [2024-12-14 00:21:19.052112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:39.919 [2024-12-14 00:21:19.052121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:39.919 [2024-12-14 00:21:19.052130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:39.919 [2024-12-14 00:21:19.053535] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.486 [2024-12-14 00:21:19.379791] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:40.486 [2024-12-14 00:21:19.380035] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:40.745 [2024-12-14 00:21:19.862269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.745 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:41.004 ************************************ 00:40:41.004 START TEST lvs_grow_clean 00:40:41.004 ************************************ 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:40:41.004 00:21:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:41.004 00:21:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:41.263 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:41.263 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:41.263 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:41.263 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:41.263 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:41.523 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:41.523 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:41.523 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 lvol 150 00:40:41.782 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1fee7476-5aa0-47f3-99ce-06cf3520a5df 00:40:41.782 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:41.782 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:41.782 [2024-12-14 00:21:20.906112] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:41.782 [2024-12-14 00:21:20.906212] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:41.782 true 00:40:42.041 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:42.041 00:21:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:42.041 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:42.041 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:42.299 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1fee7476-5aa0-47f3-99ce-06cf3520a5df 00:40:42.558 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:42.558 [2024-12-14 00:21:21.654647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:42.558 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=94569 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 94569 /var/tmp/bdevperf.sock 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 94569 ']' 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:42.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:42.816 00:21:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:42.816 [2024-12-14 00:21:21.920341] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:40:42.816 [2024-12-14 00:21:21.920431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94569 ] 00:40:43.075 [2024-12-14 00:21:22.031674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.075 [2024-12-14 00:21:22.140257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.644 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:43.644 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:43.644 00:21:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:44.212 Nvme0n1 00:40:44.212 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:44.212 [ 00:40:44.212 { 00:40:44.212 "name": "Nvme0n1", 00:40:44.212 "aliases": [ 00:40:44.212 "1fee7476-5aa0-47f3-99ce-06cf3520a5df" 00:40:44.212 ], 00:40:44.212 "product_name": "NVMe disk", 00:40:44.212 
"block_size": 4096, 00:40:44.212 "num_blocks": 38912, 00:40:44.212 "uuid": "1fee7476-5aa0-47f3-99ce-06cf3520a5df", 00:40:44.212 "numa_id": 1, 00:40:44.212 "assigned_rate_limits": { 00:40:44.212 "rw_ios_per_sec": 0, 00:40:44.212 "rw_mbytes_per_sec": 0, 00:40:44.212 "r_mbytes_per_sec": 0, 00:40:44.212 "w_mbytes_per_sec": 0 00:40:44.212 }, 00:40:44.212 "claimed": false, 00:40:44.212 "zoned": false, 00:40:44.212 "supported_io_types": { 00:40:44.212 "read": true, 00:40:44.212 "write": true, 00:40:44.212 "unmap": true, 00:40:44.212 "flush": true, 00:40:44.212 "reset": true, 00:40:44.212 "nvme_admin": true, 00:40:44.212 "nvme_io": true, 00:40:44.212 "nvme_io_md": false, 00:40:44.212 "write_zeroes": true, 00:40:44.212 "zcopy": false, 00:40:44.212 "get_zone_info": false, 00:40:44.212 "zone_management": false, 00:40:44.212 "zone_append": false, 00:40:44.212 "compare": true, 00:40:44.212 "compare_and_write": true, 00:40:44.212 "abort": true, 00:40:44.212 "seek_hole": false, 00:40:44.212 "seek_data": false, 00:40:44.212 "copy": true, 00:40:44.212 "nvme_iov_md": false 00:40:44.212 }, 00:40:44.212 "memory_domains": [ 00:40:44.212 { 00:40:44.212 "dma_device_id": "system", 00:40:44.212 "dma_device_type": 1 00:40:44.212 } 00:40:44.212 ], 00:40:44.212 "driver_specific": { 00:40:44.212 "nvme": [ 00:40:44.212 { 00:40:44.212 "trid": { 00:40:44.212 "trtype": "TCP", 00:40:44.212 "adrfam": "IPv4", 00:40:44.212 "traddr": "10.0.0.2", 00:40:44.212 "trsvcid": "4420", 00:40:44.212 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:44.212 }, 00:40:44.212 "ctrlr_data": { 00:40:44.212 "cntlid": 1, 00:40:44.212 "vendor_id": "0x8086", 00:40:44.212 "model_number": "SPDK bdev Controller", 00:40:44.212 "serial_number": "SPDK0", 00:40:44.212 "firmware_revision": "25.01", 00:40:44.212 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:44.212 "oacs": { 00:40:44.212 "security": 0, 00:40:44.212 "format": 0, 00:40:44.212 "firmware": 0, 00:40:44.212 "ns_manage": 0 00:40:44.212 }, 00:40:44.212 "multi_ctrlr": true, 
00:40:44.212 "ana_reporting": false 00:40:44.212 }, 00:40:44.212 "vs": { 00:40:44.212 "nvme_version": "1.3" 00:40:44.212 }, 00:40:44.212 "ns_data": { 00:40:44.212 "id": 1, 00:40:44.212 "can_share": true 00:40:44.212 } 00:40:44.212 } 00:40:44.212 ], 00:40:44.212 "mp_policy": "active_passive" 00:40:44.212 } 00:40:44.212 } 00:40:44.212 ] 00:40:44.212 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:44.212 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=94796 00:40:44.212 00:21:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:44.212 Running I/O for 10 seconds... 00:40:45.590 Latency(us) 00:40:45.590 [2024-12-13T23:21:24.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:45.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:45.590 Nvme0n1 : 1.00 19939.00 77.89 0.00 0.00 0.00 0.00 0.00 00:40:45.590 [2024-12-13T23:21:24.731Z] =================================================================================================================== 00:40:45.590 [2024-12-13T23:21:24.731Z] Total : 19939.00 77.89 0.00 0.00 0.00 0.00 0.00 00:40:45.590 00:40:46.159 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:46.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:46.418 Nvme0n1 : 2.00 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:40:46.418 [2024-12-13T23:21:25.559Z] 
=================================================================================================================== 00:40:46.418 [2024-12-13T23:21:25.559Z] Total : 20129.50 78.63 0.00 0.00 0.00 0.00 0.00 00:40:46.418 00:40:46.418 true 00:40:46.418 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:46.418 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:46.677 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:46.677 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:46.677 00:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 94796 00:40:47.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:47.245 Nvme0n1 : 3.00 20150.67 78.71 0.00 0.00 0.00 0.00 0.00 00:40:47.245 [2024-12-13T23:21:26.386Z] =================================================================================================================== 00:40:47.245 [2024-12-13T23:21:26.386Z] Total : 20150.67 78.71 0.00 0.00 0.00 0.00 0.00 00:40:47.245 00:40:48.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:48.622 Nvme0n1 : 4.00 20229.00 79.02 0.00 0.00 0.00 0.00 0.00 00:40:48.622 [2024-12-13T23:21:27.763Z] =================================================================================================================== 00:40:48.622 [2024-12-13T23:21:27.763Z] Total : 20229.00 79.02 0.00 0.00 0.00 0.00 0.00 00:40:48.622 00:40:49.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:40:49.560 Nvme0n1 : 5.00 20298.00 79.29 0.00 0.00 0.00 0.00 0.00 00:40:49.560 [2024-12-13T23:21:28.701Z] =================================================================================================================== 00:40:49.560 [2024-12-13T23:21:28.701Z] Total : 20298.00 79.29 0.00 0.00 0.00 0.00 0.00 00:40:49.560 00:40:50.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:50.495 Nvme0n1 : 6.00 20344.00 79.47 0.00 0.00 0.00 0.00 0.00 00:40:50.495 [2024-12-13T23:21:29.636Z] =================================================================================================================== 00:40:50.495 [2024-12-13T23:21:29.636Z] Total : 20344.00 79.47 0.00 0.00 0.00 0.00 0.00 00:40:50.495 00:40:51.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:51.433 Nvme0n1 : 7.00 20322.43 79.38 0.00 0.00 0.00 0.00 0.00 00:40:51.433 [2024-12-13T23:21:30.574Z] =================================================================================================================== 00:40:51.433 [2024-12-13T23:21:30.574Z] Total : 20322.43 79.38 0.00 0.00 0.00 0.00 0.00 00:40:51.433 00:40:52.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:52.370 Nvme0n1 : 8.00 20330.12 79.41 0.00 0.00 0.00 0.00 0.00 00:40:52.370 [2024-12-13T23:21:31.511Z] =================================================================================================================== 00:40:52.370 [2024-12-13T23:21:31.511Z] Total : 20330.12 79.41 0.00 0.00 0.00 0.00 0.00 00:40:52.370 00:40:53.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.309 Nvme0n1 : 9.00 20357.22 79.52 0.00 0.00 0.00 0.00 0.00 00:40:53.309 [2024-12-13T23:21:32.450Z] =================================================================================================================== 00:40:53.309 [2024-12-13T23:21:32.450Z] Total : 20357.22 79.52 0.00 0.00 0.00 0.00 0.00 00:40:53.309 
00:40:54.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.247 Nvme0n1 : 10.00 20378.90 79.61 0.00 0.00 0.00 0.00 0.00 00:40:54.247 [2024-12-13T23:21:33.388Z] =================================================================================================================== 00:40:54.247 [2024-12-13T23:21:33.388Z] Total : 20378.90 79.61 0.00 0.00 0.00 0.00 0.00 00:40:54.247 00:40:54.247 00:40:54.247 Latency(us) 00:40:54.247 [2024-12-13T23:21:33.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.247 Nvme0n1 : 10.00 20383.76 79.62 0.00 0.00 6276.16 3635.69 18599.74 00:40:54.247 [2024-12-13T23:21:33.388Z] =================================================================================================================== 00:40:54.247 [2024-12-13T23:21:33.388Z] Total : 20383.76 79.62 0.00 0.00 6276.16 3635.69 18599.74 00:40:54.247 { 00:40:54.247 "results": [ 00:40:54.247 { 00:40:54.247 "job": "Nvme0n1", 00:40:54.247 "core_mask": "0x2", 00:40:54.247 "workload": "randwrite", 00:40:54.247 "status": "finished", 00:40:54.247 "queue_depth": 128, 00:40:54.247 "io_size": 4096, 00:40:54.247 "runtime": 10.003896, 00:40:54.247 "iops": 20383.758487693194, 00:40:54.247 "mibps": 79.62405659255154, 00:40:54.247 "io_failed": 0, 00:40:54.247 "io_timeout": 0, 00:40:54.247 "avg_latency_us": 6276.156022359238, 00:40:54.247 "min_latency_us": 3635.687619047619, 00:40:54.247 "max_latency_us": 18599.74095238095 00:40:54.247 } 00:40:54.247 ], 00:40:54.247 "core_count": 1 00:40:54.247 } 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 94569 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 94569 ']' 00:40:54.507 00:21:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 94569 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94569 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94569' 00:40:54.507 killing process with pid 94569 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 94569 00:40:54.507 Received shutdown signal, test time was about 10.000000 seconds 00:40:54.507 00:40:54.507 Latency(us) 00:40:54.507 [2024-12-13T23:21:33.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.507 [2024-12-13T23:21:33.648Z] =================================================================================================================== 00:40:54.507 [2024-12-13T23:21:33.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:54.507 00:21:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 94569 00:40:55.446 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:55.446 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:55.706 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:55.706 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:55.966 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:55.966 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:40:55.966 00:21:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:55.966 [2024-12-14 00:21:35.074346] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:56.224 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:56.225 request: 00:40:56.225 { 00:40:56.225 "uuid": "5a5ea105-a1c3-46d2-8c09-07012b1951d3", 00:40:56.225 "method": 
"bdev_lvol_get_lvstores", 00:40:56.225 "req_id": 1 00:40:56.225 } 00:40:56.225 Got JSON-RPC error response 00:40:56.225 response: 00:40:56.225 { 00:40:56.225 "code": -19, 00:40:56.225 "message": "No such device" 00:40:56.225 } 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:40:56.225 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:56.484 aio_bdev 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1fee7476-5aa0-47f3-99ce-06cf3520a5df 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1fee7476-5aa0-47f3-99ce-06cf3520a5df 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:40:56.484 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:56.743 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1fee7476-5aa0-47f3-99ce-06cf3520a5df -t 2000 00:40:57.002 [ 00:40:57.002 { 00:40:57.002 "name": "1fee7476-5aa0-47f3-99ce-06cf3520a5df", 00:40:57.002 "aliases": [ 00:40:57.002 "lvs/lvol" 00:40:57.002 ], 00:40:57.002 "product_name": "Logical Volume", 00:40:57.003 "block_size": 4096, 00:40:57.003 "num_blocks": 38912, 00:40:57.003 "uuid": "1fee7476-5aa0-47f3-99ce-06cf3520a5df", 00:40:57.003 "assigned_rate_limits": { 00:40:57.003 "rw_ios_per_sec": 0, 00:40:57.003 "rw_mbytes_per_sec": 0, 00:40:57.003 "r_mbytes_per_sec": 0, 00:40:57.003 "w_mbytes_per_sec": 0 00:40:57.003 }, 00:40:57.003 "claimed": false, 00:40:57.003 "zoned": false, 00:40:57.003 "supported_io_types": { 00:40:57.003 "read": true, 00:40:57.003 "write": true, 00:40:57.003 "unmap": true, 00:40:57.003 "flush": false, 00:40:57.003 "reset": true, 00:40:57.003 "nvme_admin": false, 00:40:57.003 "nvme_io": false, 00:40:57.003 "nvme_io_md": false, 00:40:57.003 "write_zeroes": true, 00:40:57.003 "zcopy": false, 00:40:57.003 "get_zone_info": false, 00:40:57.003 "zone_management": false, 00:40:57.003 "zone_append": false, 00:40:57.003 "compare": false, 00:40:57.003 "compare_and_write": false, 00:40:57.003 "abort": false, 00:40:57.003 "seek_hole": true, 00:40:57.003 "seek_data": true, 00:40:57.003 "copy": false, 00:40:57.003 "nvme_iov_md": false 00:40:57.003 }, 00:40:57.003 "driver_specific": { 00:40:57.003 "lvol": { 00:40:57.003 "lvol_store_uuid": "5a5ea105-a1c3-46d2-8c09-07012b1951d3", 00:40:57.003 "base_bdev": "aio_bdev", 00:40:57.003 
"thin_provision": false, 00:40:57.003 "num_allocated_clusters": 38, 00:40:57.003 "snapshot": false, 00:40:57.003 "clone": false, 00:40:57.003 "esnap_clone": false 00:40:57.003 } 00:40:57.003 } 00:40:57.003 } 00:40:57.003 ] 00:40:57.003 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:40:57.003 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:57.003 00:21:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:57.003 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:57.003 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 00:40:57.003 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:57.262 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:57.262 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1fee7476-5aa0-47f3-99ce-06cf3520a5df 00:40:57.521 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a5ea105-a1c3-46d2-8c09-07012b1951d3 
00:40:57.780 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:57.780 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:57.780 00:40:57.780 real 0m16.994s 00:40:57.780 user 0m16.598s 00:40:57.780 sys 0m1.484s 00:40:57.780 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:57.780 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:57.780 ************************************ 00:40:57.780 END TEST lvs_grow_clean 00:40:57.780 ************************************ 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:58.039 ************************************ 00:40:58.039 START TEST lvs_grow_dirty 00:40:58.039 ************************************ 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:58.039 00:21:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:58.039 00:21:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:58.299 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:58.299 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:58.299 00:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d816f4ab-abb5-4312-a5f5-9425f00de116 00:40:58.299 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:58.299 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:40:58.558 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:58.558 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:58.558 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d816f4ab-abb5-4312-a5f5-9425f00de116 lvol 150 00:40:58.817 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=816528c5-c14c-4996-aa41-97bc64e95ea8 00:40:58.817 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:58.817 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:58.817 [2024-12-14 00:21:37.954210] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:58.817 [2024-12-14 
00:21:37.954377] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:59.077 true 00:40:59.077 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:40:59.077 00:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:59.077 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:59.077 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:59.337 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 816528c5-c14c-4996-aa41-97bc64e95ea8 00:40:59.596 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:59.596 [2024-12-14 00:21:38.722823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=97301 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 97301 /var/tmp/bdevperf.sock 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 97301 ']' 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:59.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:59.855 00:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:00.115 [2024-12-14 00:21:39.010797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:00.115 [2024-12-14 00:21:39.010887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97301 ] 00:41:00.115 [2024-12-14 00:21:39.120806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:00.115 [2024-12-14 00:21:39.228598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:00.683 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:00.683 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:00.683 00:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:01.253 Nvme0n1 00:41:01.253 00:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:01.513 [ 00:41:01.513 { 00:41:01.513 "name": "Nvme0n1", 00:41:01.513 "aliases": [ 00:41:01.513 "816528c5-c14c-4996-aa41-97bc64e95ea8" 00:41:01.513 ], 00:41:01.513 "product_name": "NVMe disk", 00:41:01.513 "block_size": 4096, 00:41:01.513 "num_blocks": 38912, 00:41:01.513 "uuid": "816528c5-c14c-4996-aa41-97bc64e95ea8", 00:41:01.513 "numa_id": 1, 00:41:01.513 "assigned_rate_limits": { 00:41:01.513 "rw_ios_per_sec": 0, 00:41:01.513 "rw_mbytes_per_sec": 0, 00:41:01.513 "r_mbytes_per_sec": 0, 00:41:01.513 "w_mbytes_per_sec": 0 00:41:01.513 }, 00:41:01.513 "claimed": false, 00:41:01.513 "zoned": false, 
00:41:01.513 "supported_io_types": { 00:41:01.513 "read": true, 00:41:01.513 "write": true, 00:41:01.513 "unmap": true, 00:41:01.513 "flush": true, 00:41:01.513 "reset": true, 00:41:01.513 "nvme_admin": true, 00:41:01.513 "nvme_io": true, 00:41:01.513 "nvme_io_md": false, 00:41:01.513 "write_zeroes": true, 00:41:01.513 "zcopy": false, 00:41:01.513 "get_zone_info": false, 00:41:01.513 "zone_management": false, 00:41:01.513 "zone_append": false, 00:41:01.513 "compare": true, 00:41:01.513 "compare_and_write": true, 00:41:01.513 "abort": true, 00:41:01.513 "seek_hole": false, 00:41:01.513 "seek_data": false, 00:41:01.513 "copy": true, 00:41:01.513 "nvme_iov_md": false 00:41:01.513 }, 00:41:01.513 "memory_domains": [ 00:41:01.513 { 00:41:01.513 "dma_device_id": "system", 00:41:01.513 "dma_device_type": 1 00:41:01.513 } 00:41:01.513 ], 00:41:01.513 "driver_specific": { 00:41:01.513 "nvme": [ 00:41:01.513 { 00:41:01.513 "trid": { 00:41:01.513 "trtype": "TCP", 00:41:01.513 "adrfam": "IPv4", 00:41:01.513 "traddr": "10.0.0.2", 00:41:01.513 "trsvcid": "4420", 00:41:01.513 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:01.513 }, 00:41:01.513 "ctrlr_data": { 00:41:01.513 "cntlid": 1, 00:41:01.513 "vendor_id": "0x8086", 00:41:01.513 "model_number": "SPDK bdev Controller", 00:41:01.513 "serial_number": "SPDK0", 00:41:01.513 "firmware_revision": "25.01", 00:41:01.513 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:01.513 "oacs": { 00:41:01.513 "security": 0, 00:41:01.513 "format": 0, 00:41:01.513 "firmware": 0, 00:41:01.513 "ns_manage": 0 00:41:01.513 }, 00:41:01.513 "multi_ctrlr": true, 00:41:01.513 "ana_reporting": false 00:41:01.513 }, 00:41:01.513 "vs": { 00:41:01.513 "nvme_version": "1.3" 00:41:01.513 }, 00:41:01.513 "ns_data": { 00:41:01.513 "id": 1, 00:41:01.513 "can_share": true 00:41:01.513 } 00:41:01.513 } 00:41:01.513 ], 00:41:01.513 "mp_policy": "active_passive" 00:41:01.513 } 00:41:01.513 } 00:41:01.513 ] 00:41:01.513 00:21:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=97534 00:41:01.513 00:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:01.513 00:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:01.513 Running I/O for 10 seconds... 00:41:02.451 Latency(us) 00:41:02.451 [2024-12-13T23:21:41.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.451 Nvme0n1 : 1.00 20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:41:02.451 [2024-12-13T23:21:41.592Z] =================================================================================================================== 00:41:02.451 [2024-12-13T23:21:41.592Z] Total : 20193.00 78.88 0.00 0.00 0.00 0.00 0.00 00:41:02.451 00:41:03.389 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:03.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.389 Nvme0n1 : 2.00 20320.00 79.38 0.00 0.00 0.00 0.00 0.00 00:41:03.389 [2024-12-13T23:21:42.530Z] =================================================================================================================== 00:41:03.389 [2024-12-13T23:21:42.530Z] Total : 20320.00 79.38 0.00 0.00 0.00 0.00 0.00 00:41:03.389 00:41:03.648 true 00:41:03.648 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:03.648 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:03.908 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:03.908 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:03.908 00:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 97534 00:41:04.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:04.476 Nvme0n1 : 3.00 20320.00 79.38 0.00 0.00 0.00 0.00 0.00 00:41:04.476 [2024-12-13T23:21:43.617Z] =================================================================================================================== 00:41:04.476 [2024-12-13T23:21:43.617Z] Total : 20320.00 79.38 0.00 0.00 0.00 0.00 0.00 00:41:04.476 00:41:05.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:05.414 Nvme0n1 : 4.00 20415.25 79.75 0.00 0.00 0.00 0.00 0.00 00:41:05.414 [2024-12-13T23:21:44.555Z] =================================================================================================================== 00:41:05.414 [2024-12-13T23:21:44.555Z] Total : 20415.25 79.75 0.00 0.00 0.00 0.00 0.00 00:41:05.414 00:41:06.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:06.522 Nvme0n1 : 5.00 20447.00 79.87 0.00 0.00 0.00 0.00 0.00 00:41:06.522 [2024-12-13T23:21:45.663Z] =================================================================================================================== 00:41:06.522 [2024-12-13T23:21:45.663Z] Total : 20447.00 79.87 0.00 0.00 0.00 0.00 0.00 00:41:06.522 00:41:07.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:41:07.528 Nvme0n1 : 6.00 20478.83 80.00 0.00 0.00 0.00 0.00 0.00 00:41:07.528 [2024-12-13T23:21:46.669Z] =================================================================================================================== 00:41:07.528 [2024-12-13T23:21:46.669Z] Total : 20478.83 80.00 0.00 0.00 0.00 0.00 0.00 00:41:07.528 00:41:08.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:08.467 Nvme0n1 : 7.00 20510.57 80.12 0.00 0.00 0.00 0.00 0.00 00:41:08.467 [2024-12-13T23:21:47.608Z] =================================================================================================================== 00:41:08.467 [2024-12-13T23:21:47.608Z] Total : 20510.57 80.12 0.00 0.00 0.00 0.00 0.00 00:41:08.467 00:41:09.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.418 Nvme0n1 : 8.00 20550.25 80.27 0.00 0.00 0.00 0.00 0.00 00:41:09.418 [2024-12-13T23:21:48.559Z] =================================================================================================================== 00:41:09.418 [2024-12-13T23:21:48.559Z] Total : 20550.25 80.27 0.00 0.00 0.00 0.00 0.00 00:41:09.418 00:41:10.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.798 Nvme0n1 : 9.00 20567.00 80.34 0.00 0.00 0.00 0.00 0.00 00:41:10.798 [2024-12-13T23:21:49.939Z] =================================================================================================================== 00:41:10.798 [2024-12-13T23:21:49.939Z] Total : 20567.00 80.34 0.00 0.00 0.00 0.00 0.00 00:41:10.798 00:41:11.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.735 Nvme0n1 : 10.00 20542.30 80.24 0.00 0.00 0.00 0.00 0.00 00:41:11.735 [2024-12-13T23:21:50.876Z] =================================================================================================================== 00:41:11.735 [2024-12-13T23:21:50.876Z] Total : 20542.30 80.24 0.00 0.00 0.00 0.00 0.00 00:41:11.735 00:41:11.735 
00:41:11.735 Latency(us) 00:41:11.735 [2024-12-13T23:21:50.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:11.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.735 Nvme0n1 : 10.01 20542.59 80.24 0.00 0.00 6227.73 3932.16 18474.91 00:41:11.735 [2024-12-13T23:21:50.876Z] =================================================================================================================== 00:41:11.735 [2024-12-13T23:21:50.876Z] Total : 20542.59 80.24 0.00 0.00 6227.73 3932.16 18474.91 00:41:11.735 { 00:41:11.735 "results": [ 00:41:11.735 { 00:41:11.735 "job": "Nvme0n1", 00:41:11.735 "core_mask": "0x2", 00:41:11.735 "workload": "randwrite", 00:41:11.735 "status": "finished", 00:41:11.735 "queue_depth": 128, 00:41:11.735 "io_size": 4096, 00:41:11.735 "runtime": 10.006091, 00:41:11.735 "iops": 20542.58750994769, 00:41:11.735 "mibps": 80.24448246073317, 00:41:11.735 "io_failed": 0, 00:41:11.735 "io_timeout": 0, 00:41:11.735 "avg_latency_us": 6227.73445118359, 00:41:11.735 "min_latency_us": 3932.16, 00:41:11.735 "max_latency_us": 18474.910476190475 00:41:11.735 } 00:41:11.735 ], 00:41:11.735 "core_count": 1 00:41:11.735 } 00:41:11.735 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 97301 00:41:11.735 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 97301 ']' 00:41:11.735 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 97301 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:11.736 00:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97301 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97301' 00:41:11.736 killing process with pid 97301 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 97301 00:41:11.736 Received shutdown signal, test time was about 10.000000 seconds 00:41:11.736 00:41:11.736 Latency(us) 00:41:11.736 [2024-12-13T23:21:50.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:11.736 [2024-12-13T23:21:50.877Z] =================================================================================================================== 00:41:11.736 [2024-12-13T23:21:50.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:11.736 00:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 97301 00:41:12.674 00:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:12.674 00:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:12.933 00:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty 
-- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:12.933 00:21:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 94078 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 94078 00:41:13.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 94078 Killed "${NVMF_APP[@]}" "$@" 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 
00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=99328 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 99328 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 99328 ']' 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:13.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:13.193 00:21:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:13.193 [2024-12-14 00:21:52.209918] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:13.193 [2024-12-14 00:21:52.211987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:13.193 [2024-12-14 00:21:52.212057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:13.453 [2024-12-14 00:21:52.336245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:13.453 [2024-12-14 00:21:52.439746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:13.453 [2024-12-14 00:21:52.439790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:13.453 [2024-12-14 00:21:52.439802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:13.453 [2024-12-14 00:21:52.439810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:13.453 [2024-12-14 00:21:52.439820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:13.453 [2024-12-14 00:21:52.441242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.712 [2024-12-14 00:21:52.764456] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:13.712 [2024-12-14 00:21:52.764709] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:13.972 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:14.231 [2024-12-14 00:21:53.213265] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:14.231 [2024-12-14 00:21:53.213490] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:14.231 [2024-12-14 00:21:53.213555] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 816528c5-c14c-4996-aa41-97bc64e95ea8 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=816528c5-c14c-4996-aa41-97bc64e95ea8 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:14.231 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:14.491 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 816528c5-c14c-4996-aa41-97bc64e95ea8 -t 2000 00:41:14.491 [ 00:41:14.491 { 00:41:14.491 "name": "816528c5-c14c-4996-aa41-97bc64e95ea8", 00:41:14.491 "aliases": [ 00:41:14.491 "lvs/lvol" 00:41:14.491 ], 00:41:14.491 "product_name": "Logical Volume", 00:41:14.491 "block_size": 4096, 00:41:14.491 "num_blocks": 38912, 00:41:14.491 "uuid": "816528c5-c14c-4996-aa41-97bc64e95ea8", 00:41:14.491 "assigned_rate_limits": { 00:41:14.491 "rw_ios_per_sec": 0, 00:41:14.491 "rw_mbytes_per_sec": 0, 00:41:14.491 "r_mbytes_per_sec": 0, 00:41:14.491 "w_mbytes_per_sec": 0 00:41:14.491 }, 00:41:14.491 "claimed": false, 00:41:14.491 "zoned": false, 00:41:14.491 "supported_io_types": { 00:41:14.491 "read": true, 00:41:14.491 "write": true, 00:41:14.491 "unmap": true, 00:41:14.491 "flush": false, 00:41:14.491 "reset": true, 00:41:14.491 "nvme_admin": false, 00:41:14.491 "nvme_io": false, 00:41:14.491 "nvme_io_md": false, 00:41:14.491 "write_zeroes": true, 
00:41:14.491 "zcopy": false, 00:41:14.491 "get_zone_info": false, 00:41:14.491 "zone_management": false, 00:41:14.491 "zone_append": false, 00:41:14.491 "compare": false, 00:41:14.491 "compare_and_write": false, 00:41:14.491 "abort": false, 00:41:14.491 "seek_hole": true, 00:41:14.491 "seek_data": true, 00:41:14.491 "copy": false, 00:41:14.491 "nvme_iov_md": false 00:41:14.491 }, 00:41:14.491 "driver_specific": { 00:41:14.491 "lvol": { 00:41:14.491 "lvol_store_uuid": "d816f4ab-abb5-4312-a5f5-9425f00de116", 00:41:14.491 "base_bdev": "aio_bdev", 00:41:14.491 "thin_provision": false, 00:41:14.491 "num_allocated_clusters": 38, 00:41:14.491 "snapshot": false, 00:41:14.491 "clone": false, 00:41:14.491 "esnap_clone": false 00:41:14.491 } 00:41:14.491 } 00:41:14.491 } 00:41:14.491 ] 00:41:14.491 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:14.491 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:14.491 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:14.751 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:14.751 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:14.751 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:15.010 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:15.011 00:21:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:15.011 [2024-12-14 00:21:54.149886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:15.270 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:15.530 request: 00:41:15.530 { 00:41:15.530 "uuid": "d816f4ab-abb5-4312-a5f5-9425f00de116", 00:41:15.530 "method": "bdev_lvol_get_lvstores", 00:41:15.530 "req_id": 1 00:41:15.530 } 00:41:15.530 Got JSON-RPC error response 00:41:15.530 response: 00:41:15.530 { 00:41:15.530 "code": -19, 00:41:15.530 "message": "No such device" 00:41:15.530 } 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:15.530 aio_bdev 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 816528c5-c14c-4996-aa41-97bc64e95ea8 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=816528c5-c14c-4996-aa41-97bc64e95ea8 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:15.530 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:15.788 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 816528c5-c14c-4996-aa41-97bc64e95ea8 -t 2000 00:41:16.047 [ 00:41:16.047 { 00:41:16.047 "name": "816528c5-c14c-4996-aa41-97bc64e95ea8", 00:41:16.047 "aliases": [ 00:41:16.047 "lvs/lvol" 00:41:16.047 ], 00:41:16.047 "product_name": "Logical Volume", 00:41:16.047 "block_size": 4096, 00:41:16.047 "num_blocks": 38912, 00:41:16.047 "uuid": "816528c5-c14c-4996-aa41-97bc64e95ea8", 00:41:16.047 "assigned_rate_limits": { 00:41:16.047 "rw_ios_per_sec": 0, 00:41:16.047 "rw_mbytes_per_sec": 0, 00:41:16.047 
"r_mbytes_per_sec": 0, 00:41:16.047 "w_mbytes_per_sec": 0 00:41:16.047 }, 00:41:16.047 "claimed": false, 00:41:16.047 "zoned": false, 00:41:16.047 "supported_io_types": { 00:41:16.047 "read": true, 00:41:16.047 "write": true, 00:41:16.047 "unmap": true, 00:41:16.047 "flush": false, 00:41:16.047 "reset": true, 00:41:16.047 "nvme_admin": false, 00:41:16.047 "nvme_io": false, 00:41:16.047 "nvme_io_md": false, 00:41:16.047 "write_zeroes": true, 00:41:16.047 "zcopy": false, 00:41:16.047 "get_zone_info": false, 00:41:16.047 "zone_management": false, 00:41:16.047 "zone_append": false, 00:41:16.047 "compare": false, 00:41:16.047 "compare_and_write": false, 00:41:16.047 "abort": false, 00:41:16.047 "seek_hole": true, 00:41:16.047 "seek_data": true, 00:41:16.047 "copy": false, 00:41:16.047 "nvme_iov_md": false 00:41:16.047 }, 00:41:16.047 "driver_specific": { 00:41:16.047 "lvol": { 00:41:16.047 "lvol_store_uuid": "d816f4ab-abb5-4312-a5f5-9425f00de116", 00:41:16.047 "base_bdev": "aio_bdev", 00:41:16.047 "thin_provision": false, 00:41:16.047 "num_allocated_clusters": 38, 00:41:16.047 "snapshot": false, 00:41:16.047 "clone": false, 00:41:16.047 "esnap_clone": false 00:41:16.047 } 00:41:16.047 } 00:41:16.047 } 00:41:16.047 ] 00:41:16.047 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:16.047 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:16.047 00:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:16.047 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:16.047 00:21:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:16.047 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:16.306 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:16.306 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 816528c5-c14c-4996-aa41-97bc64e95ea8 00:41:16.566 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d816f4ab-abb5-4312-a5f5-9425f00de116 00:41:16.826 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:17.086 00:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:17.086 00:41:17.086 real 0m19.045s 00:41:17.086 user 0m36.507s 00:41:17.086 sys 0m3.775s 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:17.086 ************************************ 00:41:17.086 END TEST lvs_grow_dirty 00:41:17.086 ************************************ 
00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:17.086 nvmf_trace.0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:17.086 00:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:17.086 rmmod nvme_tcp 00:41:17.086 rmmod nvme_fabrics 00:41:17.086 rmmod nvme_keyring 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 99328 ']' 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 99328 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 99328 ']' 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 99328 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99328 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:17.086 00:21:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99328' 00:41:17.086 killing process with pid 99328 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 99328 00:41:17.086 00:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 99328 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.466 00:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.373 00:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:20.373 00:41:20.373 real 0m46.264s 00:41:20.373 user 0m56.871s 00:41:20.373 sys 0m9.872s 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:20.373 ************************************ 00:41:20.373 END TEST nvmf_lvs_grow 00:41:20.373 ************************************ 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:20.373 ************************************ 00:41:20.373 START TEST nvmf_bdev_io_wait 00:41:20.373 ************************************ 00:41:20.373 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:20.633 * Looking for test storage... 
00:41:20.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.633 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:20.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.633 --rc genhtml_branch_coverage=1 00:41:20.633 --rc genhtml_function_coverage=1 00:41:20.633 --rc genhtml_legend=1 00:41:20.633 --rc geninfo_all_blocks=1 00:41:20.633 --rc geninfo_unexecuted_blocks=1 00:41:20.633 00:41:20.634 ' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.634 --rc genhtml_branch_coverage=1 00:41:20.634 --rc genhtml_function_coverage=1 00:41:20.634 --rc genhtml_legend=1 00:41:20.634 --rc geninfo_all_blocks=1 00:41:20.634 --rc geninfo_unexecuted_blocks=1 00:41:20.634 00:41:20.634 ' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.634 --rc genhtml_branch_coverage=1 00:41:20.634 --rc genhtml_function_coverage=1 00:41:20.634 --rc genhtml_legend=1 00:41:20.634 --rc geninfo_all_blocks=1 00:41:20.634 --rc geninfo_unexecuted_blocks=1 00:41:20.634 00:41:20.634 ' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:20.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.634 --rc genhtml_branch_coverage=1 00:41:20.634 --rc genhtml_function_coverage=1 
00:41:20.634 --rc genhtml_legend=1 00:41:20.634 --rc geninfo_all_blocks=1 00:41:20.634 --rc geninfo_unexecuted_blocks=1 00:41:20.634 00:41:20.634 ' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:20.634 00:21:59 
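The `lt 1.15 2` / `cmp_versions` trace above (scripts/common.sh) splits each version string on dots and compares the fields numerically, padding the shorter version with zeros. A sketch of that comparison under the same rules (the function name `ver_lt` is illustrative; fields are assumed to be plain decimal integers, as in the lcov version checked here):

```shell
#!/usr/bin/env bash
# Return 0 (true) iff dotted version $1 is strictly less than $2,
# comparing numerically field by field, missing fields counting as 0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)          # unquoted: split on the local IFS
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0     # first differing field decides
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
```

Numeric comparison is the point: a lexical `[[ 1.15 < 2 ]]` would also happen to pass here, but it breaks on cases like `1.9` vs `1.10`, which field-wise comparison handles correctly.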
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.634 00:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:20.634 00:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:20.634 00:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:20.634 00:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:25.922 00:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:25.922 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:25.923 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:25.923 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:25.923 Found net devices under 0000:af:00.0: cvl_0_0 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:25.923 Found net devices under 0000:af:00.1: cvl_0_1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:25.923 00:22:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:25.923 00:22:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:25.923 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:25.923 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:26.184 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:26.184 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:26.184 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:26.184 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:26.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:26.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:41:26.184 00:41:26.184 --- 10.0.0.2 ping statistics --- 00:41:26.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.184 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:26.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:26.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:41:26.185 00:41:26.185 --- 10.0.0.1 ping statistics --- 00:41:26.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:26.185 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:26.185 00:22:05 
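The interface plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) moves the target-side port into a dedicated network namespace and numbers both ends of the link, which is what makes the two ping checks pass in each direction. A dry-run sketch of that sequence (interface names, namespace name, and addresses are taken from the log; the `run` echo-prefix is illustrative so the sketch executes without root -- drop it to actually apply the commands):

```shell
#!/usr/bin/env bash
set -euo pipefail

TARGET_IF=cvl_0_0          # target-side port, moved into the namespace
INITIATOR_IF=cvl_0_1       # initiator-side port, stays in the root ns
NS=cvl_0_0_ns_spdk         # namespace name used by the harness
TARGET_IP=10.0.0.2/24
INITIATOR_IP=10.0.0.1/24

# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

Isolating the target side this way lets initiator and target run on one physical host while still exercising a real kernel network path between two NIC ports, which is why the target application is then launched under `ip netns exec cvl_0_0_ns_spdk`.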
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103522 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103522 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 103522 ']' 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:26.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:26.185 00:22:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:26.185 [2024-12-14 00:22:05.285670] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:26.185 [2024-12-14 00:22:05.287698] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:26.185 [2024-12-14 00:22:05.287765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:26.444 [2024-12-14 00:22:05.403518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:26.444 [2024-12-14 00:22:05.507718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:26.444 [2024-12-14 00:22:05.507765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:26.444 [2024-12-14 00:22:05.507776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:26.444 [2024-12-14 00:22:05.507785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:26.444 [2024-12-14 00:22:05.507794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:26.444 [2024-12-14 00:22:05.510299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:26.444 [2024-12-14 00:22:05.510374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:26.444 [2024-12-14 00:22:05.510435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.444 [2024-12-14 00:22:05.510457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:26.444 [2024-12-14 00:22:05.510905] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.012 00:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.012 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.271 [2024-12-14 00:22:06.349603] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:27.271 [2024-12-14 00:22:06.350484] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:27.271 [2024-12-14 00:22:06.351623] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:27.271 [2024-12-14 00:22:06.352581] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.271 [2024-12-14 00:22:06.359380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.271 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.532 Malloc0 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.532 00:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:27.532 [2024-12-14 00:22:06.475408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103761 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103763 00:41:27.532 00:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.532 { 00:41:27.532 "params": { 00:41:27.532 "name": "Nvme$subsystem", 00:41:27.532 "trtype": "$TEST_TRANSPORT", 00:41:27.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.532 "adrfam": "ipv4", 00:41:27.532 "trsvcid": "$NVMF_PORT", 00:41:27.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.532 "hdgst": ${hdgst:-false}, 00:41:27.532 "ddgst": ${ddgst:-false} 00:41:27.532 }, 00:41:27.532 "method": "bdev_nvme_attach_controller" 00:41:27.532 } 00:41:27.532 EOF 00:41:27.532 )") 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103765 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.532 00:22:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.532 { 00:41:27.532 "params": { 00:41:27.532 "name": "Nvme$subsystem", 00:41:27.532 "trtype": "$TEST_TRANSPORT", 00:41:27.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.532 "adrfam": "ipv4", 00:41:27.532 "trsvcid": "$NVMF_PORT", 00:41:27.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.532 "hdgst": ${hdgst:-false}, 00:41:27.532 "ddgst": ${ddgst:-false} 00:41:27.532 }, 00:41:27.532 "method": "bdev_nvme_attach_controller" 00:41:27.532 } 00:41:27.532 EOF 00:41:27.532 )") 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103768 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.532 { 00:41:27.532 "params": { 00:41:27.532 "name": "Nvme$subsystem", 00:41:27.532 "trtype": "$TEST_TRANSPORT", 00:41:27.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.532 "adrfam": "ipv4", 00:41:27.532 "trsvcid": "$NVMF_PORT", 00:41:27.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.532 "hdgst": ${hdgst:-false}, 00:41:27.532 "ddgst": ${ddgst:-false} 00:41:27.532 }, 00:41:27.532 "method": "bdev_nvme_attach_controller" 00:41:27.532 } 00:41:27.532 EOF 00:41:27.532 )") 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.532 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.532 { 00:41:27.532 "params": { 00:41:27.532 "name": "Nvme$subsystem", 00:41:27.533 "trtype": "$TEST_TRANSPORT", 00:41:27.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.533 "adrfam": "ipv4", 00:41:27.533 "trsvcid": "$NVMF_PORT", 00:41:27.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.533 "hdgst": ${hdgst:-false}, 00:41:27.533 "ddgst": ${ddgst:-false} 00:41:27.533 }, 00:41:27.533 "method": 
"bdev_nvme_attach_controller" 00:41:27.533 } 00:41:27.533 EOF 00:41:27.533 )") 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103761 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.533 "params": { 00:41:27.533 "name": "Nvme1", 00:41:27.533 "trtype": "tcp", 00:41:27.533 "traddr": "10.0.0.2", 00:41:27.533 "adrfam": "ipv4", 00:41:27.533 "trsvcid": "4420", 00:41:27.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.533 "hdgst": false, 00:41:27.533 "ddgst": false 00:41:27.533 }, 00:41:27.533 "method": "bdev_nvme_attach_controller" 00:41:27.533 }' 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.533 "params": { 00:41:27.533 "name": "Nvme1", 00:41:27.533 "trtype": "tcp", 00:41:27.533 "traddr": "10.0.0.2", 00:41:27.533 "adrfam": "ipv4", 00:41:27.533 "trsvcid": "4420", 00:41:27.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.533 "hdgst": false, 00:41:27.533 "ddgst": false 00:41:27.533 }, 00:41:27.533 "method": "bdev_nvme_attach_controller" 00:41:27.533 }' 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.533 "params": { 00:41:27.533 "name": "Nvme1", 00:41:27.533 "trtype": "tcp", 00:41:27.533 "traddr": "10.0.0.2", 00:41:27.533 "adrfam": "ipv4", 00:41:27.533 "trsvcid": "4420", 00:41:27.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.533 "hdgst": false, 00:41:27.533 "ddgst": false 00:41:27.533 }, 00:41:27.533 "method": "bdev_nvme_attach_controller" 00:41:27.533 }' 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:27.533 00:22:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.533 "params": { 00:41:27.533 "name": "Nvme1", 00:41:27.533 "trtype": "tcp", 00:41:27.533 "traddr": "10.0.0.2", 00:41:27.533 "adrfam": "ipv4", 00:41:27.533 "trsvcid": "4420", 00:41:27.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.533 "hdgst": false, 00:41:27.533 "ddgst": false 00:41:27.533 }, 00:41:27.533 "method": "bdev_nvme_attach_controller" 
00:41:27.533 }' 00:41:27.533 [2024-12-14 00:22:06.555434] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:27.533 [2024-12-14 00:22:06.555535] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:27.533 [2024-12-14 00:22:06.555742] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:27.533 [2024-12-14 00:22:06.555743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:27.533 [2024-12-14 00:22:06.555819] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:27.533 [2024-12-14 00:22:06.555819] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:27.533 [2024-12-14 00:22:06.555846] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:41:27.533 [2024-12-14 00:22:06.555917] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:27.793 [2024-12-14 00:22:06.791618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.793 [2024-12-14 00:22:06.888956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:27.793 [2024-12-14 00:22:06.901158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:41:28.052 [2024-12-14 00:22:06.998982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.052 [2024-12-14 00:22:06.999551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:28.052 [2024-12-14 00:22:07.102230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:28.052 [2024-12-14 00:22:07.108360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:28.310 [2024-12-14 00:22:07.209152] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:28.311 Running I/O for 1 seconds... 00:41:28.569 Running I/O for 1 seconds... 00:41:28.569 Running I/O for 1 seconds... 00:41:28.828 Running I/O for 1 seconds... 
00:41:29.395 10783.00 IOPS, 42.12 MiB/s 00:41:29.395 Latency(us) 00:41:29.395 [2024-12-13T23:22:08.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.395 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:41:29.395 Nvme1n1 : 1.02 10754.47 42.01 0.00 0.00 11813.67 4213.03 25590.25 00:41:29.395 [2024-12-13T23:22:08.536Z] =================================================================================================================== 00:41:29.395 [2024-12-13T23:22:08.536Z] Total : 10754.47 42.01 0.00 0.00 11813.67 4213.03 25590.25 00:41:29.654 215688.00 IOPS, 842.53 MiB/s 00:41:29.654 Latency(us) 00:41:29.654 [2024-12-13T23:22:08.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.654 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:41:29.654 Nvme1n1 : 1.00 215332.09 841.14 0.00 0.00 591.35 267.22 1622.80 00:41:29.654 [2024-12-13T23:22:08.795Z] =================================================================================================================== 00:41:29.654 [2024-12-13T23:22:08.795Z] Total : 215332.09 841.14 0.00 0.00 591.35 267.22 1622.80 00:41:29.654 10262.00 IOPS, 40.09 MiB/s 00:41:29.654 Latency(us) 00:41:29.654 [2024-12-13T23:22:08.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.654 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:41:29.654 Nvme1n1 : 1.01 10326.65 40.34 0.00 0.00 12348.24 1997.29 18599.74 00:41:29.654 [2024-12-13T23:22:08.795Z] =================================================================================================================== 00:41:29.654 [2024-12-13T23:22:08.795Z] Total : 10326.65 40.34 0.00 0.00 12348.24 1997.29 18599.74 00:41:29.654 9059.00 IOPS, 35.39 MiB/s 00:41:29.654 Latency(us) 00:41:29.654 [2024-12-13T23:22:08.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.654 Job: Nvme1n1 (Core 
Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:41:29.654 Nvme1n1 : 1.01 9143.64 35.72 0.00 0.00 13962.88 4056.99 33953.89 00:41:29.654 [2024-12-13T23:22:08.795Z] =================================================================================================================== 00:41:29.654 [2024-12-13T23:22:08.795Z] Total : 9143.64 35.72 0.00 0.00 13962.88 4056.99 33953.89 00:41:30.222 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103763 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103765 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103768 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:30.482 00:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:30.482 rmmod nvme_tcp 00:41:30.482 rmmod nvme_fabrics 00:41:30.482 rmmod nvme_keyring 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103522 ']' 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103522 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 103522 ']' 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 103522 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103522 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103522' 00:41:30.482 killing process with pid 103522 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 103522 00:41:30.482 00:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 103522 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.862 00:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:31.862 00:22:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:33.765 00:41:33.765 real 0m13.228s 00:41:33.765 user 0m24.011s 00:41:33.765 sys 0m6.827s 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.765 ************************************ 00:41:33.765 END TEST nvmf_bdev_io_wait 00:41:33.765 ************************************ 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:33.765 ************************************ 00:41:33.765 START TEST nvmf_queue_depth 00:41:33.765 ************************************ 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:33.765 * Looking for test storage... 
00:41:33.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:33.765 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:34.024 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:34.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.024 --rc genhtml_branch_coverage=1 00:41:34.024 --rc genhtml_function_coverage=1 00:41:34.024 --rc genhtml_legend=1 00:41:34.024 --rc geninfo_all_blocks=1 00:41:34.024 --rc geninfo_unexecuted_blocks=1 00:41:34.024 00:41:34.024 ' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.025 --rc genhtml_branch_coverage=1 00:41:34.025 --rc genhtml_function_coverage=1 00:41:34.025 --rc genhtml_legend=1 00:41:34.025 --rc geninfo_all_blocks=1 00:41:34.025 --rc geninfo_unexecuted_blocks=1 00:41:34.025 00:41:34.025 ' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.025 --rc genhtml_branch_coverage=1 00:41:34.025 --rc genhtml_function_coverage=1 00:41:34.025 --rc genhtml_legend=1 00:41:34.025 --rc geninfo_all_blocks=1 00:41:34.025 --rc geninfo_unexecuted_blocks=1 00:41:34.025 00:41:34.025 ' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:34.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:34.025 --rc genhtml_branch_coverage=1 00:41:34.025 --rc genhtml_function_coverage=1 00:41:34.025 --rc genhtml_legend=1 00:41:34.025 --rc 
geninfo_all_blocks=1 00:41:34.025 --rc geninfo_unexecuted_blocks=1 00:41:34.025 00:41:34.025 ' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.025 00:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:34.025 00:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:34.025 00:22:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:34.025 00:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:39.297 
00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:39.297 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:39.297 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:39.298 00:22:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:39.298 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:39.298 Found net devices under 0000:af:00.0: cvl_0_0 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:39.298 Found net devices under 0000:af:00.1: cvl_0_1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:39.298 00:22:17 
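The `gather_supported_nvmf_pci_devs` step above walks the PCI bus and buckets NICs by vendor:device ID into the `e810`, `x722`, and `mlx` arrays; here both ports matched Intel E810 (`0x8086:0x159b`, driver `ice`) and surfaced as `cvl_0_0`/`cvl_0_1`. A minimal sketch of that bucketing, using only IDs visible in the log (the Mellanox match is simplified to a wildcard; the harness enumerates each `0x15b3` device ID individually):

```shell
#!/usr/bin/env bash
# Hedged sketch: classify a NIC the way the harness buckets PCI devices above.
# IDs are taken from the log; the 0x15b3 wildcard is a simplification.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 (ice)
        0x8086:0x37d2)               echo x722 ;;     # Intel X722 (i40e)
        0x15b3:*)                    echo mlx  ;;     # Mellanox family
        *)                           echo unknown ;;
    esac
}
```

With the device from the log: `classify_nic 0x8086 0x159b` prints `e810`, which is why `pci_devs` is reduced to the `e810` list before the per-device `net` directory lookup.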
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:39.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:39.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:41:39.298 00:41:39.298 --- 10.0.0.2 ping statistics --- 00:41:39.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.298 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:39.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:39.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:41:39.298 00:41:39.298 --- 10.0.0.1 ping statistics --- 00:41:39.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.298 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:39.298 00:22:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=107707 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 107707 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 107707 ']' 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:39.298 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:39.299 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:39.299 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:39.299 00:22:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.299 [2024-12-14 00:22:18.003153] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:39.299 [2024-12-14 00:22:18.005219] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:39.299 [2024-12-14 00:22:18.005286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.299 [2024-12-14 00:22:18.126824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:39.299 [2024-12-14 00:22:18.229986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:39.299 [2024-12-14 00:22:18.230030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:39.299 [2024-12-14 00:22:18.230042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:39.299 [2024-12-14 00:22:18.230051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:39.299 [2024-12-14 00:22:18.230060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:39.299 [2024-12-14 00:22:18.231434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:39.561 [2024-12-14 00:22:18.548425] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:39.561 [2024-12-14 00:22:18.548708] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 [2024-12-14 00:22:18.828152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.821 00:22:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 Malloc0 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:39.821 [2024-12-14 00:22:18.936315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=107944 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 107944 /var/tmp/bdevperf.sock 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 107944 ']' 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:39.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:39.821 00:22:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:40.081 [2024-12-14 00:22:18.994964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:41:40.081 [2024-12-14 00:22:18.995050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107944 ] 00:41:40.081 [2024-12-14 00:22:19.106971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.081 [2024-12-14 00:22:19.216926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.016 NVMe0n1 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:41.016 00:22:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
-s /var/tmp/bdevperf.sock perform_tests 00:41:41.016 Running I/O for 10 seconds... 00:41:43.330 9968.00 IOPS, 38.94 MiB/s [2024-12-13T23:22:23.405Z] 10251.00 IOPS, 40.04 MiB/s [2024-12-13T23:22:24.342Z] 10564.00 IOPS, 41.27 MiB/s [2024-12-13T23:22:25.276Z] 10542.50 IOPS, 41.18 MiB/s [2024-12-13T23:22:26.210Z] 10652.60 IOPS, 41.61 MiB/s [2024-12-13T23:22:27.146Z] 10679.67 IOPS, 41.72 MiB/s [2024-12-13T23:22:28.522Z] 10733.71 IOPS, 41.93 MiB/s [2024-12-13T23:22:29.456Z] 10734.38 IOPS, 41.93 MiB/s [2024-12-13T23:22:30.391Z] 10767.22 IOPS, 42.06 MiB/s [2024-12-13T23:22:30.391Z] 10780.60 IOPS, 42.11 MiB/s 00:41:51.250 Latency(us) 00:41:51.250 [2024-12-13T23:22:30.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:51.250 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:41:51.250 Verification LBA range: start 0x0 length 0x4000 00:41:51.250 NVMe0n1 : 10.06 10809.51 42.22 0.00 0.00 94353.68 15354.15 61915.92 00:41:51.250 [2024-12-13T23:22:30.391Z] =================================================================================================================== 00:41:51.250 [2024-12-13T23:22:30.391Z] Total : 10809.51 42.22 0.00 0.00 94353.68 15354.15 61915.92 00:41:51.250 { 00:41:51.250 "results": [ 00:41:51.250 { 00:41:51.250 "job": "NVMe0n1", 00:41:51.250 "core_mask": "0x1", 00:41:51.250 "workload": "verify", 00:41:51.250 "status": "finished", 00:41:51.250 "verify_range": { 00:41:51.250 "start": 0, 00:41:51.250 "length": 16384 00:41:51.250 }, 00:41:51.250 "queue_depth": 1024, 00:41:51.250 "io_size": 4096, 00:41:51.250 "runtime": 10.059104, 00:41:51.250 "iops": 10809.511463446446, 00:41:51.250 "mibps": 42.22465415408768, 00:41:51.250 "io_failed": 0, 00:41:51.250 "io_timeout": 0, 00:41:51.250 "avg_latency_us": 94353.68382087522, 00:41:51.250 "min_latency_us": 15354.148571428572, 00:41:51.250 "max_latency_us": 61915.91619047619 00:41:51.250 } 00:41:51.250 ], 00:41:51.250 "core_count": 1 00:41:51.250 } 00:41:51.250 
00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 107944 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 107944 ']' 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 107944 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107944 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.250 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107944' 00:41:51.250 killing process with pid 107944 00:41:51.251 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 107944 00:41:51.251 Received shutdown signal, test time was about 10.000000 seconds 00:41:51.251 00:41:51.251 Latency(us) 00:41:51.251 [2024-12-13T23:22:30.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:51.251 [2024-12-13T23:22:30.392Z] =================================================================================================================== 00:41:51.251 [2024-12-13T23:22:30.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:51.251 00:22:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- 
# wait 107944 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:52.184 rmmod nvme_tcp 00:41:52.184 rmmod nvme_fabrics 00:41:52.184 rmmod nvme_keyring 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 107707 ']' 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 107707 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 107707 ']' 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 107707 00:41:52.184 
00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107707 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107707' 00:41:52.184 killing process with pid 107707 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 107707 00:41:52.184 00:22:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 107707 00:41:53.559 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # 
iptables-restore 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:53.560 00:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:55.560 00:41:55.560 real 0m21.816s 00:41:55.560 user 0m26.621s 00:41:55.560 sys 0m5.944s 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.560 ************************************ 00:41:55.560 END TEST nvmf_queue_depth 00:41:55.560 ************************************ 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:55.560 ************************************ 
00:41:55.560 START TEST nvmf_target_multipath 00:41:55.560 ************************************ 00:41:55.560 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:41:55.820 * Looking for test storage... 00:41:55.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:41:55.820 00:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:55.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.820 --rc genhtml_branch_coverage=1 00:41:55.820 --rc genhtml_function_coverage=1 00:41:55.820 --rc genhtml_legend=1 00:41:55.820 --rc geninfo_all_blocks=1 00:41:55.820 --rc geninfo_unexecuted_blocks=1 00:41:55.820 00:41:55.820 ' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:55.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.820 --rc genhtml_branch_coverage=1 00:41:55.820 --rc genhtml_function_coverage=1 00:41:55.820 --rc genhtml_legend=1 00:41:55.820 --rc geninfo_all_blocks=1 00:41:55.820 --rc geninfo_unexecuted_blocks=1 00:41:55.820 00:41:55.820 ' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:55.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.820 --rc genhtml_branch_coverage=1 00:41:55.820 --rc genhtml_function_coverage=1 00:41:55.820 --rc genhtml_legend=1 00:41:55.820 --rc geninfo_all_blocks=1 00:41:55.820 --rc geninfo_unexecuted_blocks=1 00:41:55.820 00:41:55.820 ' 00:41:55.820 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:55.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:55.820 --rc genhtml_branch_coverage=1 00:41:55.820 --rc genhtml_function_coverage=1 00:41:55.820 --rc genhtml_legend=1 00:41:55.821 --rc geninfo_all_blocks=1 00:41:55.821 --rc geninfo_unexecuted_blocks=1 00:41:55.821 00:41:55.821 ' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:55.821 00:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.821 00:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:55.821 00:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:01.092 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:01.092 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:01.092 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:01.092 Found net devices under 0000:af:00.0: cvl_0_0 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:01.092 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:01.092 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:01.092 Found net devices under 0000:af:00.1: cvl_0_1 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:01.093 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:01.093 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:01.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:01.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:42:01.093 00:42:01.093 --- 10.0.0.2 ping statistics --- 00:42:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.093 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:01.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:01.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:42:01.093 00:42:01.093 --- 10.0.0.1 ping statistics --- 00:42:01.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:01.093 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:01.093 only one NIC for nvmf test 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:01.093 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:01.093 rmmod nvme_tcp 00:42:01.093 rmmod nvme_fabrics 00:42:01.093 rmmod nvme_keyring 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:01.093 00:22:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:01.093 00:22:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
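The `ipts` call at setup and the `iptr` call at teardown above illustrate a tag-and-sweep firewall pattern: every rule the test adds carries an `SPDK_NVMF` comment, so cleanup reduces to filtering `iptables-save` output before `iptables-restore`. A sketch of the filtering step, with the saved ruleset mocked as a string so it runs without root:

```shell
#!/bin/sh
# Tag-and-sweep sketch: rules added by the test carry an SPDK_NVMF
# comment; removing them all is one grep over the saved ruleset.
# The real cleanup is: iptables-save | grep -v SPDK_NVMF | iptables-restore
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

# Keep only the rules the test did not add.
printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF
```

Tagging each rule at insertion time means teardown needs no bookkeeping of which rules were added, which is why the trace shows a single `iptr` regardless of how many `ipts` calls ran.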
00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.998 
00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:02.998 00:42:02.998 real 0m7.225s 00:42:02.998 user 0m1.426s 00:42:02.998 sys 0m3.714s 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:02.998 ************************************ 00:42:02.998 END TEST nvmf_target_multipath 00:42:02.998 ************************************ 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:02.998 ************************************ 00:42:02.998 START TEST nvmf_zcopy 00:42:02.998 ************************************ 00:42:02.998 00:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:02.998 * Looking for test storage... 
00:42:02.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:02.998 00:22:42 
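The trace above steps through `scripts/common.sh` evaluating `lt 1.15 2`: both versions are split into arrays and compared numerically field by field, so `1.15 < 2` holds (and `1.9 < 1.10`, which plain string comparison would get wrong). A simplified bash re-sketch of that comparison (numeric dotted fields only; the real `cmp_versions` also splits on `-` and `:` and handles several operators):

```shell
#!/bin/bash
# version_lt A B: succeed (exit 0) iff dotted version A < version B,
# comparing numerically component by component, missing fields as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
# prints 1.15 < 2
```

Field-wise numeric comparison is the standard way to order versions in shell, since lexicographic `[[ $a < $b ]]` would rank `1.9` above `1.10`.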
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.998 --rc genhtml_branch_coverage=1 00:42:02.998 --rc genhtml_function_coverage=1 00:42:02.998 --rc genhtml_legend=1 00:42:02.998 --rc geninfo_all_blocks=1 00:42:02.998 --rc geninfo_unexecuted_blocks=1 00:42:02.998 00:42:02.998 ' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.998 --rc genhtml_branch_coverage=1 00:42:02.998 --rc genhtml_function_coverage=1 00:42:02.998 --rc genhtml_legend=1 00:42:02.998 --rc geninfo_all_blocks=1 00:42:02.998 --rc geninfo_unexecuted_blocks=1 00:42:02.998 00:42:02.998 ' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.998 --rc genhtml_branch_coverage=1 00:42:02.998 --rc genhtml_function_coverage=1 00:42:02.998 --rc genhtml_legend=1 00:42:02.998 --rc geninfo_all_blocks=1 00:42:02.998 --rc geninfo_unexecuted_blocks=1 00:42:02.998 00:42:02.998 ' 00:42:02.998 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:02.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:02.998 --rc genhtml_branch_coverage=1 00:42:02.998 --rc genhtml_function_coverage=1 00:42:02.998 --rc genhtml_legend=1 00:42:02.998 --rc geninfo_all_blocks=1 00:42:02.998 --rc geninfo_unexecuted_blocks=1 00:42:02.998 00:42:02.999 ' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:02.999 00:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:02.999 00:22:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:02.999 00:22:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:08.271 
00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.271 00:22:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:08.271 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:08.271 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:08.271 Found net devices under 0000:af:00.0: cvl_0_0 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:08.271 Found net devices under 0000:af:00.1: cvl_0_1 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.271 00:22:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.271 00:22:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:08.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:42:08.271 00:42:08.271 --- 10.0.0.2 ping statistics --- 00:42:08.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.271 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:08.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:42:08.271 00:42:08.271 --- 10.0.0.1 ping statistics --- 00:42:08.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.271 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=116536 00:42:08.271 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 116536 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 116536 ']' 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:08.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:08.272 00:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:08.272 [2024-12-14 00:22:47.335989] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:08.272 [2024-12-14 00:22:47.338139] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:42:08.272 [2024-12-14 00:22:47.338220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:08.530 [2024-12-14 00:22:47.457302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.530 [2024-12-14 00:22:47.563702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:08.530 [2024-12-14 00:22:47.563746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:08.530 [2024-12-14 00:22:47.563758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:08.530 [2024-12-14 00:22:47.563768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:08.530 [2024-12-14 00:22:47.563777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:08.530 [2024-12-14 00:22:47.565209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.788 [2024-12-14 00:22:47.877572] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:08.788 [2024-12-14 00:22:47.877816] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.047 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 [2024-12-14 00:22:48.190048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 
00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 [2024-12-14 00:22:48.214247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 malloc0 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:09.306 { 00:42:09.306 "params": { 00:42:09.306 "name": "Nvme$subsystem", 00:42:09.306 "trtype": "$TEST_TRANSPORT", 00:42:09.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:09.306 "adrfam": "ipv4", 00:42:09.306 "trsvcid": "$NVMF_PORT", 00:42:09.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:09.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:09.306 "hdgst": ${hdgst:-false}, 00:42:09.306 "ddgst": ${ddgst:-false} 00:42:09.306 }, 00:42:09.306 "method": "bdev_nvme_attach_controller" 00:42:09.306 } 00:42:09.306 EOF 00:42:09.306 )") 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:09.306 00:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:09.306 00:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:09.306 "params": { 00:42:09.306 "name": "Nvme1", 00:42:09.306 "trtype": "tcp", 00:42:09.306 "traddr": "10.0.0.2", 00:42:09.306 "adrfam": "ipv4", 00:42:09.306 "trsvcid": "4420", 00:42:09.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:09.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:09.306 "hdgst": false, 00:42:09.306 "ddgst": false 00:42:09.306 }, 00:42:09.306 "method": "bdev_nvme_attach_controller" 00:42:09.306 }' 00:42:09.306 [2024-12-14 00:22:48.367227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:09.306 [2024-12-14 00:22:48.367309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116687 ] 00:42:09.565 [2024-12-14 00:22:48.479122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.565 [2024-12-14 00:22:48.587525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.178 Running I/O for 10 seconds... 
00:42:12.051 7198.00 IOPS, 56.23 MiB/s [2024-12-13T23:22:52.128Z] 7234.00 IOPS, 56.52 MiB/s [2024-12-13T23:22:53.065Z] 7295.00 IOPS, 56.99 MiB/s [2024-12-13T23:22:54.442Z] 7328.75 IOPS, 57.26 MiB/s [2024-12-13T23:22:55.378Z] 7332.80 IOPS, 57.29 MiB/s [2024-12-13T23:22:56.313Z] 7355.17 IOPS, 57.46 MiB/s [2024-12-13T23:22:57.248Z] 7362.00 IOPS, 57.52 MiB/s [2024-12-13T23:22:58.186Z] 7373.25 IOPS, 57.60 MiB/s [2024-12-13T23:22:59.122Z] 7381.11 IOPS, 57.66 MiB/s [2024-12-13T23:22:59.122Z] 7391.00 IOPS, 57.74 MiB/s 00:42:19.981 Latency(us) 00:42:19.981 [2024-12-13T23:22:59.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:19.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:19.981 Verification LBA range: start 0x0 length 0x1000 00:42:19.981 Nvme1n1 : 10.01 7392.41 57.75 0.00 0.00 17265.82 1942.67 25090.93 00:42:19.981 [2024-12-13T23:22:59.122Z] =================================================================================================================== 00:42:19.981 [2024-12-13T23:22:59.122Z] Total : 7392.41 57.75 0.00 0.00 17265.82 1942.67 25090.93 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=118462 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:20.918 00:22:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:20.918 { 00:42:20.918 "params": { 00:42:20.918 "name": "Nvme$subsystem", 00:42:20.918 "trtype": "$TEST_TRANSPORT", 00:42:20.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:20.918 "adrfam": "ipv4", 00:42:20.918 "trsvcid": "$NVMF_PORT", 00:42:20.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:20.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:20.918 "hdgst": ${hdgst:-false}, 00:42:20.918 "ddgst": ${ddgst:-false} 00:42:20.918 }, 00:42:20.918 "method": "bdev_nvme_attach_controller" 00:42:20.918 } 00:42:20.918 EOF 00:42:20.918 )") 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:20.918 [2024-12-14 00:22:59.969788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:22:59.969834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:20.918 00:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:20.918 "params": { 00:42:20.918 "name": "Nvme1", 00:42:20.918 "trtype": "tcp", 00:42:20.918 "traddr": "10.0.0.2", 00:42:20.918 "adrfam": "ipv4", 00:42:20.918 "trsvcid": "4420", 00:42:20.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:20.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:20.918 "hdgst": false, 00:42:20.918 "ddgst": false 00:42:20.918 }, 00:42:20.918 "method": "bdev_nvme_attach_controller" 00:42:20.918 }' 00:42:20.918 [2024-12-14 00:22:59.981784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:22:59.981814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:22:59.993761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:22:59.993783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.005761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.005785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.013750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.013774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.025732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.025757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.033749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:42:20.918 [2024-12-14 00:23:00.033771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.035743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:20.918 [2024-12-14 00:23:00.035820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118462 ] 00:42:20.918 [2024-12-14 00:23:00.041746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.041771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.049726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.049747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:20.918 [2024-12-14 00:23:00.057743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:20.918 [2024-12-14 00:23:00.057764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.065728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.065748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.073745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.073765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.081742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.081762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:42:21.178 [2024-12-14 00:23:00.089743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.089763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.097748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.097770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.105737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.105757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.113720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.113738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.121740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.121760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.129724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.129745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.137739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.137759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.145731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.145750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.151082] app.c: 
919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.178 [2024-12-14 00:23:00.153721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.153740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.161743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.161763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.169744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.169765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.177726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.177746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.185750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.185770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.193729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.193748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.201733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.201752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.209733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.209751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:42:21.178 [2024-12-14 00:23:00.217731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.217750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.225744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.225763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.233738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.233758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.241718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.241737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.249737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.249757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.257724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.257743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.265733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.265751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.267343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.178 [2024-12-14 00:23:00.273739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.273760] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.281738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.281757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.289763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.289785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.297742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.297761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.305724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.305744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.178 [2024-12-14 00:23:00.313732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.178 [2024-12-14 00:23:00.313750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.321725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.321744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.329741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.329760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.337740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.337761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:21.438 [2024-12-14 00:23:00.345724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.345744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.353740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.353760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.361742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.361762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.369730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.369749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.377756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.377775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.385725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.385745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.393740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.393760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.401739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.401758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.409723] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.409742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.417736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.417756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.425749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.425772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.433727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.433746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.441742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.441762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.449720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.449740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.457738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.457758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.465737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.465756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.473736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.473755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.481739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.481758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.489736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.489755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.497715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.497732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.505738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.505757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.513723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.513741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.521747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.521765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.529753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.529773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.537725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 
[2024-12-14 00:23:00.537744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.545735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.545753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.553731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.553748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.561726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.561744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.438 [2024-12-14 00:23:00.569751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.438 [2024-12-14 00:23:00.569770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.577722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.577745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.585736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.585756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.593746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.593765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.601726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.601745] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.609740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.609759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.617736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.617755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.625734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.625753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.633744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.633763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.641721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.641743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.649737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.649757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.657750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.657770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.665722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.665744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:21.698 [2024-12-14 00:23:00.673740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.673760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.681732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.681751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.689724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.689744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.697737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.697756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.705731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.705750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.713741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.713762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.721741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.721762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.729726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.729749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.737740] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.737759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.745736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.745755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.753748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.753765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.761740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.761761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.769717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.769735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.777739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.777759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.785733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.785752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.793714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.793732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.801735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.801754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.809738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.809758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.817733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.817751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.825737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.825756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.698 [2024-12-14 00:23:00.833722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.698 [2024-12-14 00:23:00.833741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.957 [2024-12-14 00:23:00.841753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.957 [2024-12-14 00:23:00.841772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.957 [2024-12-14 00:23:00.849755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.957 [2024-12-14 00:23:00.849775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.957 [2024-12-14 00:23:00.857738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.957 [2024-12-14 00:23:00.857760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.957 [2024-12-14 00:23:00.865746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.957 
Running I/O for 5 seconds... 00:42:21.957
14184.00 IOPS, 110.81 MiB/s [2024-12-13T23:23:02.133Z]
add namespace 00:42:23.770 [2024-12-14 00:23:02.703265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.703288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.713000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.713024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.725208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.725232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.739859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.739883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.756538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.756562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.771635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.771659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.779935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.779958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.789585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.789609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.798156] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.798178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.809163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.809186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.824135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.824159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.840851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.840884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.855380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.855403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.863719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.863742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.878050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.878073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 14152.00 IOPS, 110.56 MiB/s [2024-12-13T23:23:02.911Z] [2024-12-14 00:23:02.889201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.889225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:23.770 [2024-12-14 00:23:02.904691] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:23.770 [2024-12-14 00:23:02.904715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.029 [2024-12-14 00:23:02.919149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.029 [2024-12-14 00:23:02.919173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.927282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.927305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.938250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.938273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.946663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.946686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.956398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.956421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.970341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.970364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.980500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.980524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:02.996421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:02.996451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.010646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.010670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.022473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.022498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.030489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.030513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.040389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.040414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.053817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.053842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.062030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.062052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.073489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.073514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.086430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 
[2024-12-14 00:23:03.086478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.098208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.098233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.110283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.110313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.122148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.122172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.134900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.134924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.142710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.142733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.152199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.152223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.030 [2024-12-14 00:23:03.165459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.030 [2024-12-14 00:23:03.165486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.178833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.178858] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.196434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.196467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.209776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.209801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.222584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.222607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.239930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.239954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.256676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.256712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.271398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.271421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.288907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.288931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.302309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.302332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:24.289 [2024-12-14 00:23:03.319352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.319376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.336861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.336886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.349064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.349088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.364080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.364104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.380539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.380568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.393952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.393977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.406943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.406966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.289 [2024-12-14 00:23:03.423856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.289 [2024-12-14 00:23:03.423881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.547 [2024-12-14 00:23:03.439915] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.547 [2024-12-14 00:23:03.439940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.547 [2024-12-14 00:23:03.455893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.547 [2024-12-14 00:23:03.455917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.547 [2024-12-14 00:23:03.472378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.472403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.488302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.488326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.505093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.505116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.519217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.519240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.536397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.536421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.552446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.552486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.568806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.568831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.582328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.582351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.599423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.599453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.616709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.616733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.629956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.629979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.642741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.642765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.660134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.660159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.548 [2024-12-14 00:23:03.675056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.548 [2024-12-14 00:23:03.675085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.806 [2024-12-14 00:23:03.692344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 
[2024-12-14 00:23:03.692369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.705401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.705425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.718539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.718572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.736138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.736162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.749244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.749269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.762386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.762410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.780164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.780188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.793105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.793129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.807617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.807640] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.824707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.824731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.838607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.838631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.855990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.856015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.871366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.871390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 14164.67 IOPS, 110.66 MiB/s [2024-12-13T23:23:03.948Z] [2024-12-14 00:23:03.888493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.888518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.903019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.903043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.920520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.920544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.933994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.934018] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.807 [2024-12-14 00:23:03.946544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.807 [2024-12-14 00:23:03.946568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:03.963855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:03.963880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:03.977464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:03.977488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:03.992207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:03.992230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.008501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.008525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.025132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.025157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.039000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.039023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.056279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.056304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:25.066 [2024-12-14 00:23:04.069506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.069530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.084289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.084313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.101098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.101123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.114285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.114310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.131428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.131463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.148148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.148173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.164522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.164547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.181048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.181072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.066 [2024-12-14 00:23:04.194340] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.066 [2024-12-14 00:23:04.194362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.211944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.211968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.226089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.226113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.242969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.242993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.259879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.259903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.275656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.275680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.292451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.292475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.305747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.305771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.318343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.318367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.335588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.335612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.352932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.352955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.365433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.365463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.380719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.380743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.396563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.396587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.413054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.324 [2024-12-14 00:23:04.413078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.324 [2024-12-14 00:23:04.427501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.325 [2024-12-14 00:23:04.427525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.325 [2024-12-14 00:23:04.444838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.325 
[2024-12-14 00:23:04.444862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.325 [2024-12-14 00:23:04.458341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.325 [2024-12-14 00:23:04.458365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.476059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.476085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.489163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.489187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.503699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.503724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.520406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.520432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.535539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.535565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.552293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.552317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.567396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.567422] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.584525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.584550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.600553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.600577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.615345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.615369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.632743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.632767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.646905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.646931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.663986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.664010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.679631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.679656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.583 [2024-12-14 00:23:04.696485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.583 [2024-12-14 00:23:04.696509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:25.584 [2024-12-14 00:23:04.712629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.584 [2024-12-14 00:23:04.712660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.728477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.728502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.744280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.744305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.759306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.759331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.776966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.776991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.790324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.790348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.806974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.806998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.842 [2024-12-14 00:23:04.824516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.842 [2024-12-14 00:23:04.824541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.837692] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.837720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.850541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.850564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.867723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.867747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.882760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.882785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 14184.75 IOPS, 110.82 MiB/s [2024-12-13T23:23:04.984Z] [2024-12-14 00:23:04.900381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.900405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.913684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.913708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.927131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.927155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.944811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.944835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.958309] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.958333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.843 [2024-12-14 00:23:04.975951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.843 [2024-12-14 00:23:04.975976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:04.991705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:04.991730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.008007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.008031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.024050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.024074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.040609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.040634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.055380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.055404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.072910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.072933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.085598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.085623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.098807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.098831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.116015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.116039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.132131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.132159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.148125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.148149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.164894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.164919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.177860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.177883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.190781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.190803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.208260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 
[2024-12-14 00:23:05.208284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.220521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.220544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.102 [2024-12-14 00:23:05.236923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.102 [2024-12-14 00:23:05.236947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.251371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.251395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.268832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.268856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.281892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.281916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.294818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.294842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.312136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.312160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.329243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.329266] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.341450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.341473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.355900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.355924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.372565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.372589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.387064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.387089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.404257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.404281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.420086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.420114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.436743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.436767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.451014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.451038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:26.361 [2024-12-14 00:23:05.468322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.468345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.484245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.484270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.361 [2024-12-14 00:23:05.499280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.361 [2024-12-14 00:23:05.499305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.516581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.516606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.531120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.531145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.548140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.548164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.564418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.564449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.580622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.580645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.592582] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.592606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.608977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.609001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.622145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.622169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.639721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.620 [2024-12-14 00:23:05.639746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.620 [2024-12-14 00:23:05.654959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.654984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.672252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.672276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.687499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.687524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.704291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.704323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.717582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.717606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.731778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.731801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.621 [2024-12-14 00:23:05.748868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.621 [2024-12-14 00:23:05.748892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.763670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.763695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.780357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.780382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.796837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.796862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.811520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.811544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.828632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.828656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.843426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 
[2024-12-14 00:23:05.843458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.860852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.860876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.875137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.875161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 14197.20 IOPS, 110.92 MiB/s [2024-12-13T23:23:06.021Z] [2024-12-14 00:23:05.892001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.892025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880
00:42:26.880 Latency(us)
00:42:26.880 [2024-12-13T23:23:06.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:26.880 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:26.880 Nvme1n1 : 5.01 14198.25 110.92 0.00 0.00 9004.93 2278.16 15104.49
00:42:26.880 [2024-12-13T23:23:06.021Z] ===================================================================================================================
00:42:26.880 [2024-12-13T23:23:06.021Z] Total : 14198.25 110.92 0.00 0.00 9004.93 2278.16 15104.49
00:42:26.880 [2024-12-14 00:23:05.901724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.901745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.913738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.913760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.925739] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.925760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.937739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.937759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.949805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.949838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.961762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.961783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.973749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.973768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:05.985746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:05.985766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:06.005742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:06.005763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.880 [2024-12-14 00:23:06.017737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.880 [2024-12-14 00:23:06.017755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.139 [2024-12-14 00:23:06.029737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:42:27.139 [2024-12-14 00:23:06.029756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.041737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.041760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.053754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.053777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.065735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.065754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.077733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.077753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.089738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.089758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.101735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.101755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.113734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.113753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.125741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 
[2024-12-14 00:23:06.125761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.137724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.137742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.149749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.149768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.161738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.161758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.173724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.173748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.185740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.185760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.197721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.197741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.209740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.209759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.221740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.221760] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.233726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.233746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.245756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.245779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.257744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.257763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.140 [2024-12-14 00:23:06.269732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.140 [2024-12-14 00:23:06.269751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.399 [2024-12-14 00:23:06.281771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.399 [2024-12-14 00:23:06.281791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.399 [2024-12-14 00:23:06.293719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.399 [2024-12-14 00:23:06.293738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.399 [2024-12-14 00:23:06.305742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.399 [2024-12-14 00:23:06.305763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.399 [2024-12-14 00:23:06.317765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.399 [2024-12-14 00:23:06.317789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:42:27.399 [2024-12-14 00:23:06.329726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.399 [2024-12-14 00:23:06.329744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [preceding two *ERROR* messages repeated at ~12 ms intervals through 2024-12-14 00:23:06.797759] 00:42:27.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (118462) - No such process 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49
-- # wait 118462 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.918 delay0 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:27.918 00:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:27.918 [2024-12-14 00:23:07.008616] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:42:36.037 [2024-12-14 00:23:14.094174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(6) to be set 00:42:36.037 Initializing NVMe Controllers 00:42:36.037 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:36.037 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:36.037 Initialization complete. Launching workers. 00:42:36.037 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 24185 00:42:36.037 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 24305, failed to submit 124 00:42:36.037 success 24223, unsuccessful 82, failed 0 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:36.037 rmmod nvme_tcp 00:42:36.037 rmmod nvme_fabrics 00:42:36.037 rmmod nvme_keyring 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 116536 ']' 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 116536 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 116536 ']' 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 116536 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116536 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116536' 00:42:36.037 killing process with pid 116536 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 116536 00:42:36.037 00:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 116536 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:36.296 00:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:38.832 00:42:38.832 real 0m35.549s 00:42:38.832 user 0m47.418s 00:42:38.832 sys 0m12.443s 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:38.832 ************************************ 00:42:38.832 END TEST nvmf_zcopy 00:42:38.832 ************************************ 00:42:38.832 
00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:38.832 ************************************ 00:42:38.832 START TEST nvmf_nmic 00:42:38.832 ************************************ 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:42:38.832 * Looking for test storage... 00:42:38.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:38.832 00:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.832 --rc genhtml_branch_coverage=1 00:42:38.832 --rc 
genhtml_function_coverage=1 00:42:38.832 --rc genhtml_legend=1 00:42:38.832 --rc geninfo_all_blocks=1 00:42:38.832 --rc geninfo_unexecuted_blocks=1 00:42:38.832 00:42:38.832 ' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.832 --rc genhtml_branch_coverage=1 00:42:38.832 --rc genhtml_function_coverage=1 00:42:38.832 --rc genhtml_legend=1 00:42:38.832 --rc geninfo_all_blocks=1 00:42:38.832 --rc geninfo_unexecuted_blocks=1 00:42:38.832 00:42:38.832 ' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.832 --rc genhtml_branch_coverage=1 00:42:38.832 --rc genhtml_function_coverage=1 00:42:38.832 --rc genhtml_legend=1 00:42:38.832 --rc geninfo_all_blocks=1 00:42:38.832 --rc geninfo_unexecuted_blocks=1 00:42:38.832 00:42:38.832 ' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:38.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:38.832 --rc genhtml_branch_coverage=1 00:42:38.832 --rc genhtml_function_coverage=1 00:42:38.832 --rc genhtml_legend=1 00:42:38.832 --rc geninfo_all_blocks=1 00:42:38.832 --rc geninfo_unexecuted_blocks=1 00:42:38.832 00:42:38.832 ' 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:38.832 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.833 00:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:38.833 00:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:38.833 00:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:42:38.833 00:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:44.107 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:44.107 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:44.107 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:44.108 Found net devices under 0000:af:00.0: cvl_0_0 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:44.108 Found net devices under 0000:af:00.1: cvl_0_1 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:44.108 00:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:44.108 00:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:44.108 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:44.108 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:44.108 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:44.108 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:44.108 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:44.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:44.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:42:44.367 00:42:44.367 --- 10.0.0.2 ping statistics --- 00:42:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.367 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:44.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:44.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:42:44.367 00:42:44.367 --- 10.0.0.1 ping statistics --- 00:42:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:44.367 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:44.367 00:23:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:44.367 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=124158 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 124158 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 124158 ']' 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:44.368 00:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:44.368 [2024-12-14 00:23:23.402003] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:42:44.368 [2024-12-14 00:23:23.404084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:42:44.368 [2024-12-14 00:23:23.404169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:44.627 [2024-12-14 00:23:23.522108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:44.627 [2024-12-14 00:23:23.630878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:44.627 [2024-12-14 00:23:23.630921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:44.627 [2024-12-14 00:23:23.630933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:44.627 [2024-12-14 00:23:23.630942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:44.627 [2024-12-14 00:23:23.630968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:44.627 [2024-12-14 00:23:23.633393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:44.627 [2024-12-14 00:23:23.633412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:44.627 [2024-12-14 00:23:23.633516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:44.627 [2024-12-14 00:23:23.633527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:44.886 [2024-12-14 00:23:23.954404] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:44.886 [2024-12-14 00:23:23.955296] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:42:44.886 [2024-12-14 00:23:23.956507] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:44.886 [2024-12-14 00:23:23.957322] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:44.886 [2024-12-14 00:23:23.957595] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.145 [2024-12-14 00:23:24.242648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.145 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 Malloc0 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 [2024-12-14 
00:23:24.358483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:45.404 test case1: single bdev can't be used in multiple subsystems 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 [2024-12-14 00:23:24.386176] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:45.404 [2024-12-14 00:23:24.386210] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:45.404 [2024-12-14 00:23:24.386222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:45.404 request: 00:42:45.404 { 00:42:45.404 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:45.404 "namespace": { 00:42:45.404 "bdev_name": "Malloc0", 00:42:45.404 "no_auto_visible": false, 00:42:45.404 "hide_metadata": false 00:42:45.404 }, 00:42:45.404 "method": "nvmf_subsystem_add_ns", 00:42:45.404 "req_id": 1 00:42:45.404 } 00:42:45.404 Got JSON-RPC error response 00:42:45.404 response: 00:42:45.404 { 00:42:45.404 "code": -32602, 00:42:45.404 "message": "Invalid parameters" 00:42:45.404 } 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:42:45.404 Adding namespace failed - expected result. 
00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:45.404 test case2: host connect to nvmf target in multiple paths 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:45.404 [2024-12-14 00:23:24.398274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.404 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:45.664 00:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:45.922 00:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:45.922 00:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:42:45.922 00:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:42:45.922 00:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:42:45.922 00:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:42:48.455 00:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:48.455 [global] 00:42:48.455 thread=1 00:42:48.455 invalidate=1 00:42:48.455 rw=write 00:42:48.456 time_based=1 00:42:48.456 runtime=1 00:42:48.456 ioengine=libaio 00:42:48.456 direct=1 00:42:48.456 bs=4096 00:42:48.456 iodepth=1 00:42:48.456 norandommap=0 00:42:48.456 numjobs=1 00:42:48.456 00:42:48.456 verify_dump=1 00:42:48.456 verify_backlog=512 00:42:48.456 verify_state_save=0 00:42:48.456 do_verify=1 00:42:48.456 verify=crc32c-intel 00:42:48.456 [job0] 00:42:48.456 filename=/dev/nvme0n1 00:42:48.456 Could not set queue depth (nvme0n1) 00:42:48.456 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:48.456 fio-3.35 00:42:48.456 Starting 1 thread 00:42:49.391 00:42:49.391 job0: (groupid=0, jobs=1): err= 0: pid=124971: Sat Dec 14 
00:23:28 2024 00:42:49.391 read: IOPS=2286, BW=9147KiB/s (9366kB/s)(9156KiB/1001msec) 00:42:49.391 slat (nsec): min=6290, max=24889, avg=7047.35, stdev=836.42 00:42:49.391 clat (usec): min=208, max=445, avg=239.14, stdev=16.07 00:42:49.391 lat (usec): min=215, max=452, avg=246.19, stdev=16.10 00:42:49.391 clat percentiles (usec): 00:42:49.391 | 1.00th=[ 219], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 227], 00:42:49.391 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 245], 00:42:49.391 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:42:49.391 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 437], 99.95th=[ 441], 00:42:49.391 | 99.99th=[ 445] 00:42:49.391 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:42:49.391 slat (nsec): min=8991, max=40311, avg=10102.71, stdev=1309.59 00:42:49.391 clat (usec): min=135, max=379, avg=156.32, stdev= 8.05 00:42:49.391 lat (usec): min=145, max=420, avg=166.42, stdev= 8.45 00:42:49.391 clat percentiles (usec): 00:42:49.391 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:42:49.391 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 157], 00:42:49.391 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 167], 00:42:49.391 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 215], 99.95th=[ 239], 00:42:49.391 | 99.99th=[ 379] 00:42:49.391 bw ( KiB/s): min=11560, max=11560, per=100.00%, avg=11560.00, stdev= 0.00, samples=1 00:42:49.391 iops : min= 2890, max= 2890, avg=2890.00, stdev= 0.00, samples=1 00:42:49.392 lat (usec) : 250=92.23%, 500=7.77% 00:42:49.392 cpu : usr=2.60%, sys=4.10%, ctx=4850, majf=0, minf=1 00:42:49.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:49.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:49.392 issued rwts: total=2289,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:49.392 
latency : target=0, window=0, percentile=100.00%, depth=1 00:42:49.392 00:42:49.392 Run status group 0 (all jobs): 00:42:49.392 READ: bw=9147KiB/s (9366kB/s), 9147KiB/s-9147KiB/s (9366kB/s-9366kB/s), io=9156KiB (9376kB), run=1001-1001msec 00:42:49.392 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:42:49.392 00:42:49.392 Disk stats (read/write): 00:42:49.392 nvme0n1: ios=2098/2311, merge=0/0, ticks=512/346, in_queue=858, util=91.88% 00:42:49.392 00:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:50.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:50.026 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:50.026 rmmod nvme_tcp 00:42:50.027 rmmod nvme_fabrics 00:42:50.027 rmmod nvme_keyring 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 124158 ']' 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 124158 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 124158 ']' 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 124158 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:50.027 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124158 00:42:50.343 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:50.343 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:50.343 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124158' 00:42:50.343 killing process with pid 124158 00:42:50.343 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 124158 00:42:50.343 00:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 124158 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:42:51.721 00:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.626 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:53.626 00:42:53.626 real 0m14.985s 00:42:53.626 user 0m27.477s 00:42:53.626 sys 0m5.992s 00:42:53.626 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:53.626 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:53.626 ************************************ 00:42:53.626 END TEST nvmf_nmic 00:42:53.626 ************************************ 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:53.627 ************************************ 00:42:53.627 START TEST nvmf_fio_target 00:42:53.627 ************************************ 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:53.627 * Looking for test storage... 
00:42:53.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:53.627 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:53.886 
00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:53.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.886 --rc genhtml_branch_coverage=1 00:42:53.886 --rc genhtml_function_coverage=1 00:42:53.886 --rc genhtml_legend=1 00:42:53.886 --rc geninfo_all_blocks=1 00:42:53.886 --rc geninfo_unexecuted_blocks=1 00:42:53.886 00:42:53.886 ' 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:53.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.886 --rc genhtml_branch_coverage=1 00:42:53.886 --rc genhtml_function_coverage=1 00:42:53.886 --rc genhtml_legend=1 00:42:53.886 --rc geninfo_all_blocks=1 00:42:53.886 --rc geninfo_unexecuted_blocks=1 00:42:53.886 00:42:53.886 ' 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:53.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.886 --rc genhtml_branch_coverage=1 00:42:53.886 --rc genhtml_function_coverage=1 00:42:53.886 --rc genhtml_legend=1 00:42:53.886 --rc geninfo_all_blocks=1 00:42:53.886 --rc geninfo_unexecuted_blocks=1 00:42:53.886 00:42:53.886 ' 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:53.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.886 --rc genhtml_branch_coverage=1 00:42:53.886 --rc genhtml_function_coverage=1 00:42:53.886 --rc genhtml_legend=1 00:42:53.886 --rc geninfo_all_blocks=1 
00:42:53.886 --rc geninfo_unexecuted_blocks=1 00:42:53.886 00:42:53.886 ' 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:53.886 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:53.887 
00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.887 00:23:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:53.887 
00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:53.887 00:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:53.887 00:23:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:42:59.161 00:23:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:59.161 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:59.161 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:59.161 
00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:59.161 Found net 
devices under 0000:af:00.0: cvl_0_0 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:59.161 Found net devices under 0000:af:00.1: cvl_0_1 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:59.161 00:23:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:42:59.161 00:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:59.161 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:59.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:59.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:42:59.162 00:42:59.162 --- 10.0.0.2 ping statistics --- 00:42:59.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.162 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:59.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:59.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:42:59.162 00:42:59.162 --- 10.0.0.1 ping statistics --- 00:42:59.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:59.162 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.162 00:23:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=128893 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 128893 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 128893 ']' 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:59.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:59.162 00:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.421 [2024-12-14 00:23:38.341374] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:59.421 [2024-12-14 00:23:38.343534] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:42:59.421 [2024-12-14 00:23:38.343601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:59.421 [2024-12-14 00:23:38.459957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:59.679 [2024-12-14 00:23:38.568268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:59.679 [2024-12-14 00:23:38.568313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:59.679 [2024-12-14 00:23:38.568324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:59.680 [2024-12-14 00:23:38.568349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:59.680 [2024-12-14 00:23:38.568359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:59.680 [2024-12-14 00:23:38.570846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:59.680 [2024-12-14 00:23:38.570936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:59.680 [2024-12-14 00:23:38.571013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:59.680 [2024-12-14 00:23:38.571024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:59.938 [2024-12-14 00:23:38.895869] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:59.938 [2024-12-14 00:23:38.897283] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:59.938 [2024-12-14 00:23:38.899036] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:42:59.938 [2024-12-14 00:23:38.900567] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:59.938 [2024-12-14 00:23:38.900884] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:00.197 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:00.456 [2024-12-14 00:23:39.356069] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:00.456 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:00.714 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:00.714 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:43:00.972 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:00.972 00:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:01.230 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:01.230 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:01.488 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:01.488 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:01.488 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:02.056 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:02.057 00:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:02.057 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:02.057 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:02.315 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:43:02.315 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:02.574 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:02.833 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:02.833 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:02.833 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:02.833 00:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:03.092 00:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:03.351 [2024-12-14 00:23:42.331925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:03.351 00:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:03.610 00:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:03.868 00:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:04.127 00:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:43:06.030 00:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:06.030 [global] 00:43:06.030 thread=1 00:43:06.030 invalidate=1 00:43:06.030 rw=write 00:43:06.030 time_based=1 00:43:06.030 runtime=1 00:43:06.030 ioengine=libaio 00:43:06.030 direct=1 00:43:06.030 bs=4096 00:43:06.030 iodepth=1 00:43:06.030 norandommap=0 00:43:06.030 numjobs=1 00:43:06.030 00:43:06.030 verify_dump=1 00:43:06.030 verify_backlog=512 00:43:06.030 verify_state_save=0 00:43:06.030 do_verify=1 00:43:06.030 verify=crc32c-intel 00:43:06.030 [job0] 00:43:06.030 filename=/dev/nvme0n1 00:43:06.030 [job1] 00:43:06.030 filename=/dev/nvme0n2 00:43:06.030 [job2] 00:43:06.030 filename=/dev/nvme0n3 00:43:06.030 [job3] 00:43:06.030 filename=/dev/nvme0n4 00:43:06.030 Could not set queue depth (nvme0n1) 00:43:06.030 Could not set queue depth (nvme0n2) 00:43:06.030 Could not set queue depth (nvme0n3) 00:43:06.030 Could not set queue depth (nvme0n4) 00:43:06.289 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.289 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.289 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.289 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:06.289 fio-3.35 00:43:06.289 Starting 4 threads 00:43:07.666 00:43:07.666 job0: (groupid=0, jobs=1): err= 0: pid=130202: Sat Dec 14 00:23:46 2024 00:43:07.666 read: IOPS=503, BW=2015KiB/s (2064kB/s)(2084KiB/1034msec) 00:43:07.666 slat (nsec): min=6606, max=26358, avg=8299.76, stdev=2566.33 00:43:07.666 clat (usec): min=243, max=41185, avg=1444.10, stdev=6576.51 00:43:07.666 lat (usec): min=251, 
max=41196, avg=1452.40, stdev=6578.82 00:43:07.666 clat percentiles (usec): 00:43:07.666 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 293], 00:43:07.666 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:43:07.666 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 437], 95.00th=[ 502], 00:43:07.666 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:07.666 | 99.99th=[41157] 00:43:07.666 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:43:07.666 slat (nsec): min=10570, max=37076, avg=12270.76, stdev=1600.27 00:43:07.666 clat (usec): min=167, max=415, avg=253.82, stdev=36.25 00:43:07.666 lat (usec): min=179, max=453, avg=266.09, stdev=36.62 00:43:07.666 clat percentiles (usec): 00:43:07.666 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 239], 00:43:07.666 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:43:07.666 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:43:07.666 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 416], 00:43:07.666 | 99.99th=[ 416] 00:43:07.666 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=2 00:43:07.666 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:43:07.666 lat (usec) : 250=31.39%, 500=66.67%, 750=1.04% 00:43:07.666 lat (msec) : 50=0.91% 00:43:07.666 cpu : usr=1.36%, sys=1.84%, ctx=1548, majf=0, minf=1 00:43:07.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.666 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:07.666 job1: (groupid=0, jobs=1): err= 0: pid=130203: Sat Dec 14 00:23:46 2024 00:43:07.666 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 
00:43:07.666 slat (nsec): min=9968, max=24658, avg=19573.10, stdev=5698.62 00:43:07.666 clat (usec): min=40806, max=42086, avg=41069.20, stdev=325.06 00:43:07.666 lat (usec): min=40830, max=42097, avg=41088.78, stdev=323.92 00:43:07.666 clat percentiles (usec): 00:43:07.666 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:07.666 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:07.666 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:43:07.666 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:07.667 | 99.99th=[42206] 00:43:07.667 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:43:07.667 slat (nsec): min=10219, max=54710, avg=11657.21, stdev=2303.44 00:43:07.667 clat (usec): min=196, max=415, avg=277.77, stdev=22.94 00:43:07.667 lat (usec): min=210, max=426, avg=289.42, stdev=22.99 00:43:07.667 clat percentiles (usec): 00:43:07.667 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:43:07.667 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:43:07.667 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:43:07.667 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 416], 99.95th=[ 416], 00:43:07.667 | 99.99th=[ 416] 00:43:07.667 bw ( KiB/s): min= 4096, max= 4096, per=22.98%, avg=4096.00, stdev= 0.00, samples=1 00:43:07.667 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:07.667 lat (usec) : 250=9.01%, 500=87.05% 00:43:07.667 lat (msec) : 50=3.94% 00:43:07.667 cpu : usr=0.69%, sys=0.69%, ctx=533, majf=0, minf=1 00:43:07.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.667 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.667 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:43:07.667 job2: (groupid=0, jobs=1): err= 0: pid=130204: Sat Dec 14 00:23:46 2024 00:43:07.667 read: IOPS=1024, BW=4099KiB/s (4198kB/s)(4124KiB/1006msec) 00:43:07.667 slat (nsec): min=6837, max=23092, avg=8049.80, stdev=1543.71 00:43:07.667 clat (usec): min=254, max=40996, avg=609.03, stdev=3336.46 00:43:07.667 lat (usec): min=262, max=41018, avg=617.08, stdev=3337.41 00:43:07.667 clat percentiles (usec): 00:43:07.667 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 289], 00:43:07.667 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 347], 00:43:07.667 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 396], 95.00th=[ 424], 00:43:07.667 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:07.667 | 99.99th=[41157] 00:43:07.667 write: IOPS=1526, BW=6107KiB/s (6254kB/s)(6144KiB/1006msec); 0 zone resets 00:43:07.667 slat (nsec): min=9500, max=55129, avg=10979.90, stdev=1827.79 00:43:07.667 clat (usec): min=155, max=400, avg=225.23, stdev=46.22 00:43:07.667 lat (usec): min=165, max=455, avg=236.21, stdev=46.46 00:43:07.667 clat percentiles (usec): 00:43:07.667 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 180], 00:43:07.667 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 210], 60.00th=[ 239], 00:43:07.667 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 302], 00:43:07.667 | 99.00th=[ 318], 99.50th=[ 334], 99.90th=[ 383], 99.95th=[ 400], 00:43:07.667 | 99.99th=[ 400] 00:43:07.667 bw ( KiB/s): min= 4096, max= 8192, per=34.47%, avg=6144.00, stdev=2896.31, samples=2 00:43:07.667 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:43:07.667 lat (usec) : 250=39.07%, 500=60.54%, 750=0.08%, 1000=0.04% 00:43:07.667 lat (msec) : 50=0.27% 00:43:07.667 cpu : usr=1.89%, sys=3.08%, ctx=2567, majf=0, minf=1 00:43:07.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:43:07.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.667 issued rwts: total=1031,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:07.667 job3: (groupid=0, jobs=1): err= 0: pid=130205: Sat Dec 14 00:23:46 2024 00:43:07.667 read: IOPS=1107, BW=4430KiB/s (4536kB/s)(4492KiB/1014msec) 00:43:07.667 slat (nsec): min=6740, max=24927, avg=7709.57, stdev=1617.79 00:43:07.667 clat (usec): min=220, max=41066, avg=604.86, stdev=3626.36 00:43:07.667 lat (usec): min=228, max=41089, avg=612.57, stdev=3627.63 00:43:07.667 clat percentiles (usec): 00:43:07.667 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:43:07.667 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:43:07.667 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 445], 00:43:07.667 | 99.00th=[ 506], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:07.667 | 99.99th=[41157] 00:43:07.667 write: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec); 0 zone resets 00:43:07.667 slat (nsec): min=9706, max=38269, avg=11357.29, stdev=1766.37 00:43:07.667 clat (usec): min=155, max=432, avg=196.51, stdev=35.54 00:43:07.667 lat (usec): min=166, max=466, avg=207.87, stdev=36.29 00:43:07.667 clat percentiles (usec): 00:43:07.667 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 00:43:07.667 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:43:07.667 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 269], 95.00th=[ 293], 00:43:07.667 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 433], 00:43:07.667 | 99.99th=[ 433] 00:43:07.667 bw ( KiB/s): min= 4096, max= 8192, per=34.47%, avg=6144.00, stdev=2896.31, samples=2 00:43:07.667 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:43:07.667 lat (usec) : 250=59.27%, 500=39.83%, 750=0.56% 00:43:07.667 lat (msec) : 50=0.34% 00:43:07.667 cpu : usr=1.68%, sys=2.37%, ctx=2661, 
majf=0, minf=1 00:43:07.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:07.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:07.667 issued rwts: total=1123,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:07.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:07.667 00:43:07.667 Run status group 0 (all jobs): 00:43:07.667 READ: bw=10.2MiB/s (10.7MB/s), 82.9KiB/s-4430KiB/s (84.9kB/s-4536kB/s), io=10.5MiB (11.0MB), run=1006-1034msec 00:43:07.667 WRITE: bw=17.4MiB/s (18.3MB/s), 2022KiB/s-6107KiB/s (2070kB/s-6254kB/s), io=18.0MiB (18.9MB), run=1006-1034msec 00:43:07.667 00:43:07.667 Disk stats (read/write): 00:43:07.667 nvme0n1: ios=539/1024, merge=0/0, ticks=1414/253, in_queue=1667, util=85.67% 00:43:07.667 nvme0n2: ios=67/512, merge=0/0, ticks=768/139, in_queue=907, util=90.95% 00:43:07.667 nvme0n3: ios=1084/1536, merge=0/0, ticks=531/336, in_queue=867, util=94.69% 00:43:07.667 nvme0n4: ios=1142/1536, merge=0/0, ticks=1418/298, in_queue=1716, util=94.33% 00:43:07.667 00:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:07.667 [global] 00:43:07.667 thread=1 00:43:07.667 invalidate=1 00:43:07.667 rw=randwrite 00:43:07.667 time_based=1 00:43:07.667 runtime=1 00:43:07.667 ioengine=libaio 00:43:07.667 direct=1 00:43:07.667 bs=4096 00:43:07.667 iodepth=1 00:43:07.667 norandommap=0 00:43:07.667 numjobs=1 00:43:07.667 00:43:07.667 verify_dump=1 00:43:07.667 verify_backlog=512 00:43:07.667 verify_state_save=0 00:43:07.667 do_verify=1 00:43:07.667 verify=crc32c-intel 00:43:07.667 [job0] 00:43:07.667 filename=/dev/nvme0n1 00:43:07.667 [job1] 00:43:07.667 filename=/dev/nvme0n2 00:43:07.667 [job2] 00:43:07.667 filename=/dev/nvme0n3 00:43:07.667 [job3] 
00:43:07.667 filename=/dev/nvme0n4 00:43:07.667 Could not set queue depth (nvme0n1) 00:43:07.667 Could not set queue depth (nvme0n2) 00:43:07.667 Could not set queue depth (nvme0n3) 00:43:07.667 Could not set queue depth (nvme0n4) 00:43:07.926 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:07.926 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:07.926 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:07.926 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:07.926 fio-3.35 00:43:07.926 Starting 4 threads 00:43:09.303 00:43:09.303 job0: (groupid=0, jobs=1): err= 0: pid=130566: Sat Dec 14 00:23:48 2024 00:43:09.303 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:43:09.303 slat (nsec): min=7152, max=39436, avg=8167.22, stdev=1320.63 00:43:09.303 clat (usec): min=212, max=283, avg=245.26, stdev= 7.73 00:43:09.303 lat (usec): min=227, max=291, avg=253.43, stdev= 7.76 00:43:09.303 clat percentiles (usec): 00:43:09.303 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 239], 00:43:09.303 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:43:09.303 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:43:09.303 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 281], 00:43:09.303 | 99.99th=[ 285] 00:43:09.303 write: IOPS=2453, BW=9814KiB/s (10.0MB/s)(9824KiB/1001msec); 0 zone resets 00:43:09.303 slat (nsec): min=10397, max=49265, avg=11706.86, stdev=1981.78 00:43:09.303 clat (usec): min=142, max=399, avg=178.60, stdev=14.25 00:43:09.303 lat (usec): min=153, max=417, avg=190.31, stdev=14.54 00:43:09.303 clat percentiles (usec): 00:43:09.303 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:43:09.303 | 30.00th=[ 174], 40.00th=[ 176], 
50.00th=[ 178], 60.00th=[ 182], 00:43:09.303 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:43:09.303 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 241], 99.95th=[ 367], 00:43:09.303 | 99.99th=[ 400] 00:43:09.303 bw ( KiB/s): min= 9464, max= 9464, per=37.41%, avg=9464.00, stdev= 0.00, samples=1 00:43:09.303 iops : min= 2366, max= 2366, avg=2366.00, stdev= 0.00, samples=1 00:43:09.303 lat (usec) : 250=89.10%, 500=10.90% 00:43:09.303 cpu : usr=3.50%, sys=7.40%, ctx=4505, majf=0, minf=1 00:43:09.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.303 issued rwts: total=2048,2456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.303 job1: (groupid=0, jobs=1): err= 0: pid=130567: Sat Dec 14 00:23:48 2024 00:43:09.303 read: IOPS=1902, BW=7608KiB/s (7791kB/s)(7616KiB/1001msec) 00:43:09.303 slat (nsec): min=6880, max=20470, avg=7880.59, stdev=1043.83 00:43:09.303 clat (usec): min=230, max=537, avg=286.92, stdev=41.80 00:43:09.303 lat (usec): min=237, max=548, avg=294.80, stdev=41.94 00:43:09.303 clat percentiles (usec): 00:43:09.303 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:43:09.303 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:43:09.303 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 351], 95.00th=[ 367], 00:43:09.303 | 99.00th=[ 404], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[ 537], 00:43:09.303 | 99.99th=[ 537] 00:43:09.303 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:09.303 slat (nsec): min=9709, max=40555, avg=11160.12, stdev=1763.73 00:43:09.303 clat (usec): min=152, max=385, avg=197.37, stdev=25.20 00:43:09.303 lat (usec): min=162, max=415, avg=208.53, stdev=25.69 00:43:09.303 clat percentiles (usec): 
00:43:09.303 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:43:09.303 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:43:09.303 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 225], 95.00th=[ 260], 00:43:09.303 | 99.00th=[ 285], 99.50th=[ 285], 99.90th=[ 334], 99.95th=[ 334], 00:43:09.303 | 99.99th=[ 388] 00:43:09.303 bw ( KiB/s): min= 8192, max= 8192, per=32.38%, avg=8192.00, stdev= 0.00, samples=1 00:43:09.303 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:09.303 lat (usec) : 250=53.26%, 500=46.71%, 750=0.03% 00:43:09.303 cpu : usr=3.10%, sys=6.40%, ctx=3952, majf=0, minf=1 00:43:09.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.303 issued rwts: total=1904,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.303 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.303 job2: (groupid=0, jobs=1): err= 0: pid=130568: Sat Dec 14 00:23:48 2024 00:43:09.303 read: IOPS=1204, BW=4817KiB/s (4933kB/s)(4856KiB/1008msec) 00:43:09.303 slat (nsec): min=7141, max=28036, avg=8228.56, stdev=1669.73 00:43:09.303 clat (usec): min=226, max=42045, avg=533.04, stdev=3100.51 00:43:09.303 lat (usec): min=234, max=42069, avg=541.26, stdev=3101.54 00:43:09.303 clat percentiles (usec): 00:43:09.303 | 1.00th=[ 235], 5.00th=[ 247], 10.00th=[ 255], 20.00th=[ 265], 00:43:09.303 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:43:09.303 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 355], 95.00th=[ 363], 00:43:09.303 | 99.00th=[ 383], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:43:09.303 | 99.99th=[42206] 00:43:09.303 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:43:09.303 slat (nsec): min=9850, max=37635, avg=11169.51, stdev=1624.42 00:43:09.303 clat (usec): 
min=162, max=370, avg=211.99, stdev=25.96 00:43:09.303 lat (usec): min=173, max=385, avg=223.16, stdev=26.12 00:43:09.303 clat percentiles (usec): 00:43:09.303 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:43:09.303 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:43:09.303 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 269], 00:43:09.303 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 371], 00:43:09.303 | 99.99th=[ 371] 00:43:09.303 bw ( KiB/s): min= 4096, max= 8192, per=24.29%, avg=6144.00, stdev=2896.31, samples=2 00:43:09.303 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:43:09.303 lat (usec) : 250=52.44%, 500=47.31% 00:43:09.303 lat (msec) : 50=0.25% 00:43:09.303 cpu : usr=2.38%, sys=4.17%, ctx=2750, majf=0, minf=1 00:43:09.303 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.304 issued rwts: total=1214,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.304 job3: (groupid=0, jobs=1): err= 0: pid=130569: Sat Dec 14 00:23:48 2024 00:43:09.304 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:43:09.304 slat (nsec): min=9643, max=23851, avg=20504.50, stdev=5074.26 00:43:09.304 clat (usec): min=40604, max=42006, avg=41137.91, stdev=396.96 00:43:09.304 lat (usec): min=40614, max=42028, avg=41158.42, stdev=397.02 00:43:09.304 clat percentiles (usec): 00:43:09.304 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:09.304 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:09.304 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:09.304 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:09.304 | 
99.99th=[42206] 00:43:09.304 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:43:09.304 slat (nsec): min=9200, max=35318, avg=10231.36, stdev=1512.50 00:43:09.304 clat (usec): min=169, max=868, avg=240.91, stdev=37.71 00:43:09.304 lat (usec): min=179, max=879, avg=251.14, stdev=38.00 00:43:09.304 clat percentiles (usec): 00:43:09.304 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:43:09.304 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:43:09.304 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:43:09.304 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 873], 99.95th=[ 873], 00:43:09.304 | 99.99th=[ 873] 00:43:09.304 bw ( KiB/s): min= 4096, max= 4096, per=16.19%, avg=4096.00, stdev= 0.00, samples=1 00:43:09.304 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:09.304 lat (usec) : 250=64.42%, 500=31.27%, 1000=0.19% 00:43:09.304 lat (msec) : 50=4.12% 00:43:09.304 cpu : usr=0.10%, sys=0.68%, ctx=534, majf=0, minf=1 00:43:09.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:09.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:09.304 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:09.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:09.304 00:43:09.304 Run status group 0 (all jobs): 00:43:09.304 READ: bw=19.6MiB/s (20.5MB/s), 84.9KiB/s-8184KiB/s (87.0kB/s-8380kB/s), io=20.3MiB (21.2MB), run=1001-1036msec 00:43:09.304 WRITE: bw=24.7MiB/s (25.9MB/s), 1977KiB/s-9814KiB/s (2024kB/s-10.0MB/s), io=25.6MiB (26.8MB), run=1001-1036msec 00:43:09.304 00:43:09.304 Disk stats (read/write): 00:43:09.304 nvme0n1: ios=1806/2048, merge=0/0, ticks=499/342, in_queue=841, util=86.67% 00:43:09.304 nvme0n2: ios=1586/1791, merge=0/0, ticks=504/341, in_queue=845, util=91.05% 00:43:09.304 nvme0n3: 
ios=1267/1536, merge=0/0, ticks=540/309, in_queue=849, util=94.89% 00:43:09.304 nvme0n4: ios=74/512, merge=0/0, ticks=778/123, in_queue=901, util=95.69% 00:43:09.304 00:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:09.304 [global] 00:43:09.304 thread=1 00:43:09.304 invalidate=1 00:43:09.304 rw=write 00:43:09.304 time_based=1 00:43:09.304 runtime=1 00:43:09.304 ioengine=libaio 00:43:09.304 direct=1 00:43:09.304 bs=4096 00:43:09.304 iodepth=128 00:43:09.304 norandommap=0 00:43:09.304 numjobs=1 00:43:09.304 00:43:09.304 verify_dump=1 00:43:09.304 verify_backlog=512 00:43:09.304 verify_state_save=0 00:43:09.304 do_verify=1 00:43:09.304 verify=crc32c-intel 00:43:09.304 [job0] 00:43:09.304 filename=/dev/nvme0n1 00:43:09.304 [job1] 00:43:09.304 filename=/dev/nvme0n2 00:43:09.304 [job2] 00:43:09.304 filename=/dev/nvme0n3 00:43:09.304 [job3] 00:43:09.304 filename=/dev/nvme0n4 00:43:09.304 Could not set queue depth (nvme0n1) 00:43:09.304 Could not set queue depth (nvme0n2) 00:43:09.304 Could not set queue depth (nvme0n3) 00:43:09.304 Could not set queue depth (nvme0n4) 00:43:09.563 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:09.563 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:09.563 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:09.563 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:09.563 fio-3.35 00:43:09.563 Starting 4 threads 00:43:10.944 00:43:10.944 job0: (groupid=0, jobs=1): err= 0: pid=130934: Sat Dec 14 00:23:49 2024 00:43:10.944 read: IOPS=5830, BW=22.8MiB/s (23.9MB/s)(23.0MiB/1008msec) 00:43:10.944 slat (nsec): min=1404, max=10573k, avg=83404.36, 
stdev=678214.77 00:43:10.944 clat (usec): min=987, max=22038, avg=10624.36, stdev=3046.26 00:43:10.944 lat (usec): min=2835, max=24999, avg=10707.76, stdev=3095.46 00:43:10.944 clat percentiles (usec): 00:43:10.944 | 1.00th=[ 5800], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8291], 00:43:10.944 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:43:10.944 | 70.00th=[10945], 80.00th=[12911], 90.00th=[15008], 95.00th=[17433], 00:43:10.944 | 99.00th=[19268], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:43:10.944 | 99.99th=[22152] 00:43:10.944 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:43:10.944 slat (usec): min=2, max=9532, avg=77.77, stdev=544.58 00:43:10.945 clat (usec): min=1648, max=32388, avg=10633.43, stdev=5027.61 00:43:10.945 lat (usec): min=1709, max=32393, avg=10711.20, stdev=5071.56 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 4359], 5.00th=[ 5669], 10.00th=[ 6783], 20.00th=[ 7832], 00:43:10.945 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159], 00:43:10.945 | 70.00th=[10421], 80.00th=[11469], 90.00th=[14615], 95.00th=[22152], 00:43:10.945 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:43:10.945 | 99.99th=[32375] 00:43:10.945 bw ( KiB/s): min=24576, max=24576, per=36.68%, avg=24576.00, stdev= 0.00, samples=2 00:43:10.945 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:43:10.945 lat (usec) : 1000=0.01% 00:43:10.945 lat (msec) : 2=0.06%, 4=0.43%, 10=50.84%, 20=45.53%, 50=3.13% 00:43:10.945 cpu : usr=5.16%, sys=6.85%, ctx=423, majf=0, minf=1 00:43:10.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:10.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.945 issued rwts: total=5877,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.945 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:43:10.945 job1: (groupid=0, jobs=1): err= 0: pid=130935: Sat Dec 14 00:23:49 2024 00:43:10.945 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(18.1MiB/1020msec) 00:43:10.945 slat (nsec): min=1018, max=20966k, avg=93833.40, stdev=845264.24 00:43:10.945 clat (usec): min=773, max=58845, avg=14042.57, stdev=8519.61 00:43:10.945 lat (usec): min=781, max=58851, avg=14136.40, stdev=8580.58 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 881], 5.00th=[ 4752], 10.00th=[ 5669], 20.00th=[ 8979], 00:43:10.945 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11731], 60.00th=[12780], 00:43:10.945 | 70.00th=[14877], 80.00th=[18482], 90.00th=[25822], 95.00th=[31589], 00:43:10.945 | 99.00th=[45876], 99.50th=[54264], 99.90th=[58983], 99.95th=[58983], 00:43:10.945 | 99.99th=[58983] 00:43:10.945 write: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(20.0MiB/1020msec); 0 zone resets 00:43:10.945 slat (nsec): min=1864, max=14065k, avg=87077.86, stdev=584498.36 00:43:10.945 clat (usec): min=569, max=56074, avg=12609.39, stdev=8742.19 00:43:10.945 lat (usec): min=576, max=56084, avg=12696.47, stdev=8805.77 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 2024], 5.00th=[ 4490], 10.00th=[ 6259], 20.00th=[ 7635], 00:43:10.945 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10945], 60.00th=[11338], 00:43:10.945 | 70.00th=[11863], 80.00th=[14615], 90.00th=[17171], 95.00th=[34866], 00:43:10.945 | 99.00th=[50594], 99.50th=[53740], 99.90th=[55837], 99.95th=[55837], 00:43:10.945 | 99.99th=[55837] 00:43:10.945 bw ( KiB/s): min=16832, max=23216, per=29.88%, avg=20024.00, stdev=4514.17, samples=2 00:43:10.945 iops : min= 4208, max= 5804, avg=5006.00, stdev=1128.54, samples=2 00:43:10.945 lat (usec) : 750=0.03%, 1000=0.91% 00:43:10.945 lat (msec) : 2=0.90%, 4=2.12%, 10=28.46%, 20=56.08%, 50=10.50% 00:43:10.945 lat (msec) : 100=0.99% 00:43:10.945 cpu : usr=3.14%, sys=4.91%, ctx=420, majf=0, minf=2 00:43:10.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.3%, >=64=99.4% 00:43:10.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.945 issued rwts: total=4622,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.945 job2: (groupid=0, jobs=1): err= 0: pid=130936: Sat Dec 14 00:23:49 2024 00:43:10.945 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.2MiB/1017msec) 00:43:10.945 slat (nsec): min=1415, max=15951k, avg=129099.70, stdev=927958.04 00:43:10.945 clat (usec): min=3637, max=36151, avg=16609.52, stdev=5521.55 00:43:10.945 lat (usec): min=3648, max=36175, avg=16738.62, stdev=5588.63 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[10814], 20.00th=[11469], 00:43:10.945 | 30.00th=[11863], 40.00th=[13960], 50.00th=[15926], 60.00th=[17171], 00:43:10.945 | 70.00th=[19268], 80.00th=[21103], 90.00th=[24511], 95.00th=[26608], 00:43:10.945 | 99.00th=[32375], 99.50th=[32637], 99.90th=[32900], 99.95th=[34341], 00:43:10.945 | 99.99th=[35914] 00:43:10.945 write: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 00:43:10.945 slat (usec): min=2, max=14434, avg=161.19, stdev=865.88 00:43:10.945 clat (usec): min=1461, max=59528, avg=21552.80, stdev=12466.76 00:43:10.945 lat (usec): min=1476, max=59539, avg=21713.99, stdev=12553.17 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 5473], 5.00th=[10290], 10.00th=[11338], 20.00th=[11994], 00:43:10.945 | 30.00th=[12256], 40.00th=[13304], 50.00th=[16909], 60.00th=[21365], 00:43:10.945 | 70.00th=[24249], 80.00th=[32375], 90.00th=[41681], 95.00th=[47973], 00:43:10.945 | 99.00th=[58459], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:43:10.945 | 99.99th=[59507] 00:43:10.945 bw ( KiB/s): min=11528, max=16488, per=20.91%, avg=14008.00, stdev=3507.25, samples=2 00:43:10.945 iops : min= 2882, max= 4122, avg=3502.00, stdev=876.81, 
samples=2 00:43:10.945 lat (msec) : 2=0.03%, 4=0.36%, 10=4.69%, 20=60.53%, 50=32.18% 00:43:10.945 lat (msec) : 100=2.21% 00:43:10.945 cpu : usr=3.54%, sys=3.35%, ctx=336, majf=0, minf=1 00:43:10.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:10.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.945 issued rwts: total=3118,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.945 job3: (groupid=0, jobs=1): err= 0: pid=130937: Sat Dec 14 00:23:49 2024 00:43:10.945 read: IOPS=2023, BW=8095KiB/s (8289kB/s)(8192KiB/1012msec) 00:43:10.945 slat (nsec): min=1517, max=28092k, avg=184322.20, stdev=1341542.77 00:43:10.945 clat (usec): min=8545, max=70527, avg=22955.54, stdev=15974.85 00:43:10.945 lat (usec): min=8551, max=70556, avg=23139.86, stdev=16075.63 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:43:10.945 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14877], 60.00th=[15270], 00:43:10.945 | 70.00th=[17695], 80.00th=[39060], 90.00th=[49546], 95.00th=[62653], 00:43:10.945 | 99.00th=[67634], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:43:10.945 | 99.99th=[70779] 00:43:10.945 write: IOPS=2212, BW=8850KiB/s (9062kB/s)(8956KiB/1012msec); 0 zone resets 00:43:10.945 slat (usec): min=2, max=26714, avg=271.37, stdev=1663.22 00:43:10.945 clat (usec): min=10250, max=94390, avg=35837.50, stdev=17323.53 00:43:10.945 lat (usec): min=11775, max=94398, avg=36108.87, stdev=17443.57 00:43:10.945 clat percentiles (usec): 00:43:10.945 | 1.00th=[13042], 5.00th=[14877], 10.00th=[17695], 20.00th=[21627], 00:43:10.945 | 30.00th=[24249], 40.00th=[26084], 50.00th=[30016], 60.00th=[38011], 00:43:10.945 | 70.00th=[44303], 80.00th=[49021], 90.00th=[54789], 95.00th=[70779], 00:43:10.945 | 
99.00th=[93848], 99.50th=[93848], 99.90th=[94897], 99.95th=[94897], 00:43:10.945 | 99.99th=[94897] 00:43:10.945 bw ( KiB/s): min= 6664, max=10224, per=12.60%, avg=8444.00, stdev=2517.30, samples=2 00:43:10.945 iops : min= 1666, max= 2556, avg=2111.00, stdev=629.33, samples=2 00:43:10.945 lat (msec) : 10=0.68%, 20=40.84%, 50=44.76%, 100=13.72% 00:43:10.945 cpu : usr=1.88%, sys=2.97%, ctx=233, majf=0, minf=1 00:43:10.945 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:43:10.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.945 issued rwts: total=2048,2239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.945 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.945 00:43:10.945 Run status group 0 (all jobs): 00:43:10.945 READ: bw=60.0MiB/s (62.9MB/s), 8095KiB/s-22.8MiB/s (8289kB/s-23.9MB/s), io=61.2MiB (64.2MB), run=1008-1020msec 00:43:10.945 WRITE: bw=65.4MiB/s (68.6MB/s), 8850KiB/s-23.8MiB/s (9062kB/s-25.0MB/s), io=66.7MiB (70.0MB), run=1008-1020msec 00:43:10.945 00:43:10.945 Disk stats (read/write): 00:43:10.945 nvme0n1: ios=4641/5054, merge=0/0, ticks=49480/55143, in_queue=104623, util=91.08% 00:43:10.945 nvme0n2: ios=4222/4608, merge=0/0, ticks=50515/41545, in_queue=92060, util=86.79% 00:43:10.945 nvme0n3: ios=2937/3072, merge=0/0, ticks=47022/57101, in_queue=104123, util=88.96% 00:43:10.945 nvme0n4: ios=1581/1863, merge=0/0, ticks=19979/27919, in_queue=47898, util=99.58% 00:43:10.945 00:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:10.945 [global] 00:43:10.945 thread=1 00:43:10.945 invalidate=1 00:43:10.945 rw=randwrite 00:43:10.945 time_based=1 00:43:10.945 runtime=1 00:43:10.945 ioengine=libaio 00:43:10.945 direct=1 00:43:10.945 bs=4096 
00:43:10.945 iodepth=128 00:43:10.945 norandommap=0 00:43:10.945 numjobs=1 00:43:10.945 00:43:10.945 verify_dump=1 00:43:10.945 verify_backlog=512 00:43:10.945 verify_state_save=0 00:43:10.945 do_verify=1 00:43:10.945 verify=crc32c-intel 00:43:10.945 [job0] 00:43:10.945 filename=/dev/nvme0n1 00:43:10.945 [job1] 00:43:10.945 filename=/dev/nvme0n2 00:43:10.945 [job2] 00:43:10.945 filename=/dev/nvme0n3 00:43:10.945 [job3] 00:43:10.945 filename=/dev/nvme0n4 00:43:10.945 Could not set queue depth (nvme0n1) 00:43:10.945 Could not set queue depth (nvme0n2) 00:43:10.945 Could not set queue depth (nvme0n3) 00:43:10.945 Could not set queue depth (nvme0n4) 00:43:11.203 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.203 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.203 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.203 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:11.203 fio-3.35 00:43:11.203 Starting 4 threads 00:43:12.588 00:43:12.588 job0: (groupid=0, jobs=1): err= 0: pid=131298: Sat Dec 14 00:23:51 2024 00:43:12.588 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:43:12.588 slat (nsec): min=1637, max=23229k, avg=114599.65, stdev=970696.30 00:43:12.588 clat (usec): min=5126, max=42513, avg=15402.98, stdev=4703.65 00:43:12.588 lat (usec): min=5137, max=42530, avg=15517.58, stdev=4786.96 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 8455], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11469], 00:43:12.588 | 30.00th=[12256], 40.00th=[13173], 50.00th=[14091], 60.00th=[15664], 00:43:12.588 | 70.00th=[17695], 80.00th=[19268], 90.00th=[21627], 95.00th=[25297], 00:43:12.588 | 99.00th=[27919], 99.50th=[30016], 99.90th=[37487], 99.95th=[37487], 00:43:12.588 | 99.99th=[42730] 
00:43:12.588 write: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1013msec); 0 zone resets 00:43:12.588 slat (usec): min=2, max=42963, avg=117.57, stdev=1176.68 00:43:12.588 clat (usec): min=1535, max=56664, avg=15076.83, stdev=7579.97 00:43:12.588 lat (usec): min=1546, max=56690, avg=15194.40, stdev=7665.25 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 6390], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10945], 00:43:12.588 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[14353], 00:43:12.588 | 70.00th=[16450], 80.00th=[18482], 90.00th=[20579], 95.00th=[21627], 00:43:12.588 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:43:12.588 | 99.99th=[56886] 00:43:12.588 bw ( KiB/s): min=16399, max=17152, per=22.83%, avg=16775.50, stdev=532.45, samples=2 00:43:12.588 iops : min= 4099, max= 4288, avg=4193.50, stdev=133.64, samples=2 00:43:12.588 lat (msec) : 2=0.02%, 4=0.15%, 10=7.68%, 20=76.67%, 50=14.28% 00:43:12.588 lat (msec) : 100=1.19% 00:43:12.588 cpu : usr=3.36%, sys=6.32%, ctx=216, majf=0, minf=1 00:43:12.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:12.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:12.588 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:12.588 job1: (groupid=0, jobs=1): err= 0: pid=131299: Sat Dec 14 00:23:51 2024 00:43:12.588 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:43:12.588 slat (nsec): min=1585, max=15217k, avg=94147.53, stdev=731370.98 00:43:12.588 clat (usec): min=3752, max=38162, avg=13006.73, stdev=4978.17 00:43:12.588 lat (usec): min=3757, max=48501, avg=13100.87, stdev=5028.54 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 4948], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[ 9765], 00:43:12.588 | 
30.00th=[10290], 40.00th=[10814], 50.00th=[11731], 60.00th=[12256], 00:43:12.588 | 70.00th=[13566], 80.00th=[15008], 90.00th=[19530], 95.00th=[24773], 00:43:12.588 | 99.00th=[28443], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:43:12.588 | 99.99th=[38011] 00:43:12.588 write: IOPS=5060, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1004msec); 0 zone resets 00:43:12.588 slat (usec): min=2, max=18497, avg=96.62, stdev=754.59 00:43:12.588 clat (usec): min=620, max=36947, avg=13244.28, stdev=4695.33 00:43:12.588 lat (usec): min=1417, max=36970, avg=13340.89, stdev=4744.40 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 5211], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10290], 00:43:12.588 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11731], 60.00th=[12518], 00:43:12.588 | 70.00th=[13960], 80.00th=[17433], 90.00th=[19006], 95.00th=[21103], 00:43:12.588 | 99.00th=[31851], 99.50th=[33424], 99.90th=[33424], 99.95th=[34341], 00:43:12.588 | 99.99th=[36963] 00:43:12.588 bw ( KiB/s): min=16384, max=23240, per=26.97%, avg=19812.00, stdev=4847.92, samples=2 00:43:12.588 iops : min= 4096, max= 5810, avg=4953.00, stdev=1211.98, samples=2 00:43:12.588 lat (usec) : 750=0.01% 00:43:12.588 lat (msec) : 2=0.02%, 4=0.14%, 10=18.29%, 20=73.04%, 50=8.49% 00:43:12.588 cpu : usr=3.89%, sys=6.28%, ctx=309, majf=0, minf=1 00:43:12.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:43:12.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:12.588 issued rwts: total=4608,5081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:12.588 job2: (groupid=0, jobs=1): err= 0: pid=131300: Sat Dec 14 00:23:51 2024 00:43:12.588 read: IOPS=3594, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1014msec) 00:43:12.588 slat (nsec): min=1217, max=16053k, avg=103481.05, stdev=887175.16 00:43:12.588 clat (usec): 
min=3664, max=45698, avg=15783.08, stdev=7193.63 00:43:12.588 lat (usec): min=3670, max=45706, avg=15886.56, stdev=7227.94 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 6128], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[10552], 00:43:12.588 | 30.00th=[11207], 40.00th=[13304], 50.00th=[14222], 60.00th=[15664], 00:43:12.588 | 70.00th=[17957], 80.00th=[19530], 90.00th=[24249], 95.00th=[27132], 00:43:12.588 | 99.00th=[44303], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:43:12.588 | 99.99th=[45876] 00:43:12.588 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:43:12.588 slat (usec): min=2, max=13319, avg=106.97, stdev=794.93 00:43:12.588 clat (usec): min=359, max=141427, avg=17378.38, stdev=22638.85 00:43:12.588 lat (usec): min=374, max=141439, avg=17485.35, stdev=22773.17 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 1385], 5.00th=[ 3687], 10.00th=[ 6390], 20.00th=[ 9372], 00:43:12.588 | 30.00th=[ 11076], 40.00th=[ 12256], 50.00th=[ 12911], 60.00th=[ 14091], 00:43:12.588 | 70.00th=[ 14746], 80.00th=[ 16581], 90.00th=[ 19006], 95.00th=[ 50070], 00:43:12.588 | 99.00th=[133694], 99.50th=[139461], 99.90th=[141558], 99.95th=[141558], 00:43:12.588 | 99.99th=[141558] 00:43:12.588 bw ( KiB/s): min=11752, max=20480, per=21.94%, avg=16116.00, stdev=6171.63, samples=2 00:43:12.588 iops : min= 2938, max= 5120, avg=4029.00, stdev=1542.91, samples=2 00:43:12.588 lat (usec) : 500=0.04%, 750=0.03%, 1000=0.19% 00:43:12.588 lat (msec) : 2=0.62%, 4=2.31%, 10=16.11%, 20=67.02%, 50=11.02% 00:43:12.588 lat (msec) : 100=0.92%, 250=1.74% 00:43:12.588 cpu : usr=2.27%, sys=5.13%, ctx=393, majf=0, minf=2 00:43:12.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:12.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:12.588 issued rwts: total=3645,4096,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:43:12.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:12.588 job3: (groupid=0, jobs=1): err= 0: pid=131301: Sat Dec 14 00:23:51 2024 00:43:12.588 read: IOPS=4673, BW=18.3MiB/s (19.1MB/s)(18.3MiB/1003msec) 00:43:12.588 slat (nsec): min=1643, max=7929.3k, avg=102889.41, stdev=622459.98 00:43:12.588 clat (usec): min=660, max=21233, avg=13113.57, stdev=2540.73 00:43:12.588 lat (usec): min=4417, max=21237, avg=13216.46, stdev=2558.47 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11076], 00:43:12.588 | 30.00th=[11731], 40.00th=[12387], 50.00th=[13042], 60.00th=[13698], 00:43:12.588 | 70.00th=[14222], 80.00th=[15008], 90.00th=[16450], 95.00th=[17695], 00:43:12.588 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20579], 99.95th=[20579], 00:43:12.588 | 99.99th=[21365] 00:43:12.588 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:43:12.588 slat (usec): min=2, max=6453, avg=95.48, stdev=568.44 00:43:12.588 clat (usec): min=5526, max=20086, avg=12773.59, stdev=1624.43 00:43:12.588 lat (usec): min=5537, max=20115, avg=12869.07, stdev=1693.05 00:43:12.588 clat percentiles (usec): 00:43:12.588 | 1.00th=[ 7570], 5.00th=[10159], 10.00th=[11338], 20.00th=[11863], 00:43:12.588 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:43:12.588 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14091], 95.00th=[15139], 00:43:12.588 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[20055], 00:43:12.588 | 99.99th=[20055] 00:43:12.588 bw ( KiB/s): min=20104, max=20480, per=27.62%, avg=20292.00, stdev=265.87, samples=2 00:43:12.588 iops : min= 5026, max= 5120, avg=5073.00, stdev=66.47, samples=2 00:43:12.588 lat (usec) : 750=0.01% 00:43:12.588 lat (msec) : 10=6.78%, 20=93.01%, 50=0.20% 00:43:12.588 cpu : usr=2.79%, sys=7.09%, ctx=451, majf=0, minf=2 00:43:12.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 
00:43:12.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:12.588 issued rwts: total=4688,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:12.588 00:43:12.588 Run status group 0 (all jobs): 00:43:12.588 READ: bw=65.6MiB/s (68.8MB/s), 14.0MiB/s-18.3MiB/s (14.7MB/s-19.1MB/s), io=66.6MiB (69.8MB), run=1003-1014msec 00:43:12.588 WRITE: bw=71.7MiB/s (75.2MB/s), 15.8MiB/s-19.9MiB/s (16.5MB/s-20.9MB/s), io=72.7MiB (76.3MB), run=1003-1014msec 00:43:12.588 00:43:12.588 Disk stats (read/write): 00:43:12.588 nvme0n1: ios=3257/3584, merge=0/0, ticks=50153/49321, in_queue=99474, util=98.09% 00:43:12.588 nvme0n2: ios=3822/4096, merge=0/0, ticks=34472/37217, in_queue=71689, util=98.27% 00:43:12.588 nvme0n3: ios=3611/3630, merge=0/0, ticks=49220/37381, in_queue=86601, util=98.65% 00:43:12.588 nvme0n4: ios=4096/4141, merge=0/0, ticks=27015/24672, in_queue=51687, util=89.72% 00:43:12.589 00:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:12.589 00:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=131530 00:43:12.589 00:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:12.589 00:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:12.589 [global] 00:43:12.589 thread=1 00:43:12.589 invalidate=1 00:43:12.589 rw=read 00:43:12.589 time_based=1 00:43:12.589 runtime=10 00:43:12.589 ioengine=libaio 00:43:12.589 direct=1 00:43:12.589 bs=4096 00:43:12.589 iodepth=1 00:43:12.589 norandommap=1 00:43:12.589 numjobs=1 00:43:12.589 00:43:12.589 [job0] 00:43:12.589 filename=/dev/nvme0n1 00:43:12.589 [job1] 00:43:12.589 
filename=/dev/nvme0n2 00:43:12.589 [job2] 00:43:12.589 filename=/dev/nvme0n3 00:43:12.589 [job3] 00:43:12.589 filename=/dev/nvme0n4 00:43:12.589 Could not set queue depth (nvme0n1) 00:43:12.589 Could not set queue depth (nvme0n2) 00:43:12.589 Could not set queue depth (nvme0n3) 00:43:12.589 Could not set queue depth (nvme0n4) 00:43:12.846 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:12.846 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:12.846 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:12.846 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:12.846 fio-3.35 00:43:12.846 Starting 4 threads 00:43:15.369 00:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:15.626 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16588800, buflen=4096 00:43:15.626 fio: pid=131669, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:15.626 00:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:15.883 00:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:15.883 00:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:15.883 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8499200, buflen=4096 00:43:15.883 fio: pid=131668, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:43:15.883 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9965568, buflen=4096 00:43:15.883 fio: pid=131666, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:16.139 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.139 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:16.396 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=339968, buflen=4096 00:43:16.396 fio: pid=131667, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:16.396 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.396 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:16.396 00:43:16.396 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=131666: Sat Dec 14 00:23:55 2024 00:43:16.396 read: IOPS=788, BW=3154KiB/s (3229kB/s)(9732KiB/3086msec) 00:43:16.396 slat (usec): min=4, max=11681, avg=12.31, stdev=236.65 00:43:16.396 clat (usec): min=222, max=43019, avg=1245.59, stdev=6277.39 00:43:16.396 lat (usec): min=229, max=52887, avg=1257.89, stdev=6313.84 00:43:16.396 clat percentiles (usec): 00:43:16.396 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:43:16.396 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:43:16.396 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 285], 00:43:16.396 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 
99.95th=[42206], 00:43:16.396 | 99.99th=[43254] 00:43:16.396 bw ( KiB/s): min= 96, max=15048, per=38.18%, avg=3870.40, stdev=6474.92, samples=5 00:43:16.396 iops : min= 24, max= 3762, avg=967.60, stdev=1618.73, samples=5 00:43:16.396 lat (usec) : 250=47.70%, 500=49.59%, 750=0.16%, 1000=0.04% 00:43:16.396 lat (msec) : 2=0.04%, 50=2.42% 00:43:16.396 cpu : usr=0.45%, sys=1.13%, ctx=2436, majf=0, minf=1 00:43:16.396 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:16.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.396 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.396 issued rwts: total=2434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:16.396 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=131667: Sat Dec 14 00:23:55 2024 00:43:16.396 read: IOPS=24, BW=97.4KiB/s (99.7kB/s)(332KiB/3410msec) 00:43:16.396 slat (usec): min=11, max=3710, avg=67.63, stdev=402.24 00:43:16.396 clat (usec): min=430, max=89900, avg=40749.00, stdev=10635.31 00:43:16.396 lat (usec): min=466, max=89923, avg=40817.14, stdev=10643.79 00:43:16.396 clat percentiles (usec): 00:43:16.396 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:16.396 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:16.396 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:16.396 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:43:16.396 | 99.99th=[89654] 00:43:16.396 bw ( KiB/s): min= 93, max= 104, per=0.98%, avg=99.50, stdev= 5.05, samples=6 00:43:16.396 iops : min= 23, max= 26, avg=24.83, stdev= 1.33, samples=6 00:43:16.396 lat (usec) : 500=1.19% 00:43:16.396 lat (msec) : 2=1.19%, 10=1.19%, 50=92.86%, 100=2.38% 00:43:16.396 cpu : usr=0.15%, sys=0.00%, ctx=87, majf=0, minf=2 00:43:16.396 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:16.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.396 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.396 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.396 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:16.396 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=131668: Sat Dec 14 00:23:55 2024 00:43:16.396 read: IOPS=713, BW=2851KiB/s (2920kB/s)(8300KiB/2911msec) 00:43:16.397 slat (nsec): min=6079, max=32729, avg=7684.37, stdev=1831.80 00:43:16.397 clat (usec): min=241, max=41968, avg=1383.63, stdev=6610.04 00:43:16.397 lat (usec): min=248, max=41980, avg=1391.32, stdev=6610.90 00:43:16.397 clat percentiles (usec): 00:43:16.397 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:43:16.397 | 30.00th=[ 277], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:43:16.397 | 70.00th=[ 289], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:43:16.397 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:43:16.397 | 99.99th=[42206] 00:43:16.397 bw ( KiB/s): min= 96, max=13800, per=32.60%, avg=3304.00, stdev=5953.99, samples=5 00:43:16.397 iops : min= 24, max= 3450, avg=826.00, stdev=1488.50, samples=5 00:43:16.397 lat (usec) : 250=0.19%, 500=96.82%, 750=0.24% 00:43:16.397 lat (msec) : 50=2.70% 00:43:16.397 cpu : usr=0.31%, sys=0.55%, ctx=2077, majf=0, minf=2 00:43:16.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:16.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.397 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.397 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:16.397 job3: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=131669: Sat Dec 14 00:23:55 2024 00:43:16.397 read: IOPS=1499, BW=5996KiB/s (6139kB/s)(15.8MiB/2702msec) 00:43:16.397 slat (nsec): min=5885, max=41401, avg=7770.34, stdev=2037.83 00:43:16.397 clat (usec): min=223, max=41861, avg=652.89, stdev=3873.42 00:43:16.397 lat (usec): min=231, max=41869, avg=660.66, stdev=3874.84 00:43:16.397 clat percentiles (usec): 00:43:16.397 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 253], 00:43:16.397 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:43:16.397 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 461], 00:43:16.397 | 99.00th=[ 570], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:16.397 | 99.99th=[41681] 00:43:16.397 bw ( KiB/s): min= 96, max=11464, per=53.54%, avg=5427.20, stdev=5280.75, samples=5 00:43:16.397 iops : min= 24, max= 2866, avg=1356.80, stdev=1320.19, samples=5 00:43:16.397 lat (usec) : 250=10.79%, 500=88.15%, 750=0.12% 00:43:16.397 lat (msec) : 50=0.91% 00:43:16.397 cpu : usr=0.33%, sys=1.52%, ctx=4052, majf=0, minf=2 00:43:16.397 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:16.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.397 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.397 issued rwts: total=4051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.397 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:16.397 00:43:16.397 Run status group 0 (all jobs): 00:43:16.397 READ: bw=9.90MiB/s (10.4MB/s), 97.4KiB/s-5996KiB/s (99.7kB/s-6139kB/s), io=33.8MiB (35.4MB), run=2702-3410msec 00:43:16.397 00:43:16.397 Disk stats (read/write): 00:43:16.397 nvme0n1: ios=2446/0, merge=0/0, ticks=2838/0, in_queue=2838, util=95.49% 00:43:16.397 nvme0n2: ios=116/0, merge=0/0, ticks=3487/0, in_queue=3487, util=99.89% 00:43:16.397 nvme0n3: ios=2116/0, merge=0/0, ticks=2979/0, in_queue=2979, 
util=99.90% 00:43:16.397 nvme0n4: ios=3809/0, merge=0/0, ticks=3436/0, in_queue=3436, util=99.89% 00:43:16.653 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.653 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:16.909 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:16.910 00:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:17.166 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:17.166 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:17.423 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:17.423 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:17.678 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:17.678 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 131530 00:43:17.678 00:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:17.678 00:23:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:18.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:18.606 nvmf hotplug test: fio failed as expected 00:43:18.606 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:18.863 rmmod nvme_tcp 00:43:18.863 rmmod nvme_fabrics 00:43:18.863 rmmod nvme_keyring 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 128893 ']' 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 128893 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 128893 ']' 00:43:18.863 00:23:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 128893 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:18.863 00:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128893 00:43:19.120 00:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:19.120 00:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:19.120 00:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128893' 00:43:19.120 killing process with pid 128893 00:43:19.120 00:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 128893 00:43:19.120 00:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 128893 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:20.052 
00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:20.052 00:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:22.582 00:43:22.582 real 0m28.613s 00:43:22.582 user 1m38.783s 00:43:22.582 sys 0m10.651s 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.582 ************************************ 00:43:22.582 END TEST nvmf_fio_target 00:43:22.582 ************************************ 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:43:22.582 ************************************ 00:43:22.582 START TEST nvmf_bdevio 00:43:22.582 ************************************ 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:22.582 * Looking for test storage... 00:43:22.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.582 --rc genhtml_branch_coverage=1 00:43:22.582 --rc genhtml_function_coverage=1 00:43:22.582 --rc genhtml_legend=1 00:43:22.582 --rc geninfo_all_blocks=1 00:43:22.582 --rc geninfo_unexecuted_blocks=1 00:43:22.582 00:43:22.582 ' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.582 --rc genhtml_branch_coverage=1 00:43:22.582 --rc genhtml_function_coverage=1 00:43:22.582 --rc genhtml_legend=1 00:43:22.582 --rc geninfo_all_blocks=1 00:43:22.582 --rc geninfo_unexecuted_blocks=1 00:43:22.582 00:43:22.582 ' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.582 --rc genhtml_branch_coverage=1 00:43:22.582 --rc genhtml_function_coverage=1 00:43:22.582 --rc genhtml_legend=1 00:43:22.582 --rc geninfo_all_blocks=1 00:43:22.582 --rc geninfo_unexecuted_blocks=1 00:43:22.582 00:43:22.582 ' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:22.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:22.582 --rc genhtml_branch_coverage=1 00:43:22.582 --rc genhtml_function_coverage=1 00:43:22.582 --rc genhtml_legend=1 00:43:22.582 --rc geninfo_all_blocks=1 00:43:22.582 --rc geninfo_unexecuted_blocks=1 00:43:22.582 00:43:22.582 ' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:22.582 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.583 00:24:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:22.583 00:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:27.838 00:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:27.838 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:27.839 00:24:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:27.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:27.839 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:27.839 Found net devices under 0000:af:00.0: cvl_0_0 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:27.839 Found net devices under 0000:af:00.1: cvl_0_1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:27.839 
00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:27.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:27.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:43:27.839 00:43:27.839 --- 10.0.0.2 ping statistics --- 00:43:27.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.839 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:27.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:27.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:43:27.839 00:43:27.839 --- 10.0.0.1 ping statistics --- 00:43:27.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:27.839 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=136466 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 136466 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 136466 ']' 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:27.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:27.839 00:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:27.839 [2024-12-14 00:24:06.977205] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:28.097 [2024-12-14 00:24:06.979365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:43:28.097 [2024-12-14 00:24:06.979454] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:28.097 [2024-12-14 00:24:07.097518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:28.097 [2024-12-14 00:24:07.204351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:28.097 [2024-12-14 00:24:07.204397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:28.097 [2024-12-14 00:24:07.204408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:28.097 [2024-12-14 00:24:07.204417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:28.097 [2024-12-14 00:24:07.204447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:28.097 [2024-12-14 00:24:07.207156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:43:28.097 [2024-12-14 00:24:07.207247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:43:28.097 [2024-12-14 00:24:07.207367] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:28.097 [2024-12-14 00:24:07.207391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:43:28.662 [2024-12-14 00:24:07.539270] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:28.662 [2024-12-14 00:24:07.540749] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:28.662 [2024-12-14 00:24:07.542365] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:43:28.662 [2024-12-14 00:24:07.543206] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:28.662 [2024-12-14 00:24:07.543509] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:28.662 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:28.662 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:28.662 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:28.662 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:28.662 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 [2024-12-14 00:24:07.828099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 Malloc0 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.920 [2024-12-14 00:24:07.964405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:28.920 { 00:43:28.920 "params": { 00:43:28.920 "name": "Nvme$subsystem", 00:43:28.920 "trtype": "$TEST_TRANSPORT", 00:43:28.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:28.920 "adrfam": "ipv4", 00:43:28.920 "trsvcid": "$NVMF_PORT", 00:43:28.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:28.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:28.920 "hdgst": ${hdgst:-false}, 00:43:28.920 "ddgst": ${ddgst:-false} 00:43:28.920 }, 00:43:28.920 "method": "bdev_nvme_attach_controller" 00:43:28.920 } 00:43:28.920 EOF 00:43:28.920 )") 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:28.920 00:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:28.920 "params": { 00:43:28.920 "name": "Nvme1", 00:43:28.920 "trtype": "tcp", 00:43:28.920 "traddr": "10.0.0.2", 00:43:28.920 "adrfam": "ipv4", 00:43:28.920 "trsvcid": "4420", 00:43:28.920 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:28.920 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:28.920 "hdgst": false, 00:43:28.920 "ddgst": false 00:43:28.920 }, 00:43:28.920 "method": "bdev_nvme_attach_controller" 00:43:28.920 }' 00:43:28.920 [2024-12-14 00:24:08.043900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:28.920 [2024-12-14 00:24:08.043990] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136776 ] 00:43:29.177 [2024-12-14 00:24:08.159520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:29.178 [2024-12-14 00:24:08.272692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:29.178 [2024-12-14 00:24:08.272760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:29.178 [2024-12-14 00:24:08.272765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:29.742 I/O targets: 00:43:29.742 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:29.742 00:43:29.742 00:43:29.742 CUnit - A unit testing framework for C - Version 2.1-3 00:43:29.742 http://cunit.sourceforge.net/ 00:43:29.742 00:43:29.742 00:43:29.742 Suite: bdevio tests on: Nvme1n1 00:43:29.999 Test: blockdev write read block ...passed 00:43:29.999 Test: blockdev write zeroes read block ...passed 00:43:29.999 Test: blockdev write zeroes read no split ...passed 00:43:29.999 Test: blockdev 
write zeroes read split ...passed 00:43:29.999 Test: blockdev write zeroes read split partial ...passed 00:43:29.999 Test: blockdev reset ...[2024-12-14 00:24:09.066904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:29.999 [2024-12-14 00:24:09.067012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000326480 (9): Bad file descriptor 00:43:29.999 [2024-12-14 00:24:09.114779] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:43:29.999 passed 00:43:30.256 Test: blockdev write read 8 blocks ...passed 00:43:30.256 Test: blockdev write read size > 128k ...passed 00:43:30.256 Test: blockdev write read invalid size ...passed 00:43:30.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:30.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:30.256 Test: blockdev write read max offset ...passed 00:43:30.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:30.256 Test: blockdev writev readv 8 blocks ...passed 00:43:30.256 Test: blockdev writev readv 30 x 1block ...passed 00:43:30.256 Test: blockdev writev readv block ...passed 00:43:30.256 Test: blockdev writev readv size > 128k ...passed 00:43:30.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:30.256 Test: blockdev comparev and writev ...[2024-12-14 00:24:09.330563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.330600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.330621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:43:30.256 [2024-12-14 00:24:09.330637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:30.256 [2024-12-14 00:24:09.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:30.256 [2024-12-14 00:24:09.331862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:30.256 passed 00:43:30.513 Test: blockdev nvme passthru rw ...passed 00:43:30.513 Test: blockdev nvme passthru vendor specific ...[2024-12-14 00:24:09.413799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:30.513 [2024-12-14 00:24:09.413830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:30.513 [2024-12-14 00:24:09.413976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:30.513 [2024-12-14 00:24:09.413990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:30.513 [2024-12-14 00:24:09.414125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:30.513 [2024-12-14 00:24:09.414139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:30.513 [2024-12-14 00:24:09.414273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:30.513 [2024-12-14 00:24:09.414286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:30.513 passed 00:43:30.513 Test: blockdev nvme admin passthru ...passed 00:43:30.513 Test: blockdev copy ...passed 00:43:30.513 00:43:30.513 Run Summary: Type Total Ran Passed Failed Inactive 00:43:30.513 suites 1 1 n/a 0 0 00:43:30.514 tests 23 23 23 0 0 00:43:30.514 asserts 152 152 152 0 n/a 00:43:30.514 00:43:30.514 Elapsed time = 
1.301 seconds 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:31.446 rmmod nvme_tcp 00:43:31.446 rmmod nvme_fabrics 00:43:31.446 rmmod nvme_keyring 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:31.446 00:24:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 136466 ']' 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 136466 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 136466 ']' 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 136466 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136466 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136466' 00:43:31.446 killing process with pid 136466 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 136466 00:43:31.446 00:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 136466 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:32.818 00:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:32.818 00:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.345 00:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:35.345 00:43:35.345 real 0m12.572s 00:43:35.345 user 0m18.516s 00:43:35.345 sys 0m5.380s 00:43:35.345 00:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.345 00:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:35.345 ************************************ 00:43:35.345 END TEST nvmf_bdevio 00:43:35.345 ************************************ 00:43:35.345 00:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:35.346 00:43:35.346 real 4m57.424s 00:43:35.346 user 10m11.342s 00:43:35.346 sys 1m48.865s 00:43:35.346 00:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:35.346 00:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:35.346 ************************************ 00:43:35.346 END TEST nvmf_target_core_interrupt_mode 00:43:35.346 ************************************ 00:43:35.346 00:24:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:35.346 00:24:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:35.346 00:24:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:35.346 00:24:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:35.346 ************************************ 00:43:35.346 START TEST nvmf_interrupt 00:43:35.346 ************************************ 00:43:35.346 00:24:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:35.346 * Looking for test storage... 
00:43:35.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:35.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.346 --rc genhtml_branch_coverage=1 00:43:35.346 --rc genhtml_function_coverage=1 00:43:35.346 --rc genhtml_legend=1 00:43:35.346 --rc geninfo_all_blocks=1 00:43:35.346 --rc geninfo_unexecuted_blocks=1 00:43:35.346 00:43:35.346 ' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:35.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.346 --rc genhtml_branch_coverage=1 00:43:35.346 --rc 
genhtml_function_coverage=1 00:43:35.346 --rc genhtml_legend=1 00:43:35.346 --rc geninfo_all_blocks=1 00:43:35.346 --rc geninfo_unexecuted_blocks=1 00:43:35.346 00:43:35.346 ' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:35.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.346 --rc genhtml_branch_coverage=1 00:43:35.346 --rc genhtml_function_coverage=1 00:43:35.346 --rc genhtml_legend=1 00:43:35.346 --rc geninfo_all_blocks=1 00:43:35.346 --rc geninfo_unexecuted_blocks=1 00:43:35.346 00:43:35.346 ' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:35.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:35.346 --rc genhtml_branch_coverage=1 00:43:35.346 --rc genhtml_function_coverage=1 00:43:35.346 --rc genhtml_legend=1 00:43:35.346 --rc geninfo_all_blocks=1 00:43:35.346 --rc geninfo_unexecuted_blocks=1 00:43:35.346 00:43:35.346 ' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:35.346 
00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.346 
00:24:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:35.346 00:24:14 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:35.346 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:35.347 
00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:35.347 00:24:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:40.604 00:24:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:40.604 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:40.604 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:40.604 00:24:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:40.604 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:40.605 Found net devices under 0000:af:00.0: cvl_0_0 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:40.605 Found net devices under 0000:af:00.1: cvl_0_1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:40.605 00:24:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:40.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:40.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:43:40.605 00:43:40.605 --- 10.0.0.2 ping statistics --- 00:43:40.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:40.605 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:40.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:40.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:43:40.605 00:43:40.605 --- 10.0.0.1 ping statistics --- 00:43:40.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:40.605 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:40.605 00:24:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=140746 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 140746 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 140746 ']' 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:40.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:40.605 00:24:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:40.605 [2024-12-14 00:24:19.613809] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:40.605 [2024-12-14 00:24:19.615861] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:43:40.605 [2024-12-14 00:24:19.615931] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:40.605 [2024-12-14 00:24:19.732294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:40.863 [2024-12-14 00:24:19.837060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:40.863 [2024-12-14 00:24:19.837103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:40.863 [2024-12-14 00:24:19.837114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:40.863 [2024-12-14 00:24:19.837138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:40.863 [2024-12-14 00:24:19.837152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:40.863 [2024-12-14 00:24:19.839156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:40.863 [2024-12-14 00:24:19.839168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:41.121 [2024-12-14 00:24:20.169089] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:41.121 [2024-12-14 00:24:20.169774] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:41.121 [2024-12-14 00:24:20.169996] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:41.380 5000+0 records in 00:43:41.380 5000+0 records out 00:43:41.380 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170151 s, 602 MB/s 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.380 AIO0 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:41.380 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.380 00:24:20 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.639 [2024-12-14 00:24:20.520231] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:41.639 [2024-12-14 00:24:20.548130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 140746 0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 140746 0 idle 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140746 root 20 0 20.1t 206592 99840 S 0.0 0.2 0:00.63 reactor_0' 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140746 root 20 0 20.1t 206592 99840 S 0.0 0.2 0:00.63 reactor_0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 140746 1 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 140746 1 idle 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:41.639 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140752 root 20 0 20.1t 206592 99840 S 0.0 0.2 0:00.00 reactor_1' 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140752 root 20 0 20.1t 206592 99840 S 0.0 0.2 0:00.00 
reactor_1 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=140974 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 140746 0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 140746 0 busy 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:41.898 00:24:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140746 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.65 reactor_0' 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140746 root 20 0 20.1t 207360 100608 S 0.0 0.2 0:00.65 reactor_0 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:42.176 00:24:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 
00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140746 root 20 0 20.1t 219648 100608 R 99.9 0.2 0:02.86 reactor_0' 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140746 root 20 0 20.1t 219648 100608 R 99.9 0.2 0:02.86 reactor_0 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 140746 1 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 140746 1 busy 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:43.219 00:24:22 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:43.219 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140752 root 20 0 20.1t 219648 100608 R 99.9 0.2 0:01.30 reactor_1' 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140752 root 20 0 20.1t 219648 100608 R 99.9 0.2 0:01.30 reactor_1 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:43.477 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:43.478 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:43.478 00:24:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:43.478 00:24:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 140974 00:43:53.441 Initializing NVMe Controllers 00:43:53.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:53.441 Controller IO queue size 256, less than 
required. 00:43:53.441 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:53.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:53.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:53.441 Initialization complete. Launching workers. 00:43:53.441 ======================================================== 00:43:53.441 Latency(us) 00:43:53.441 Device Information : IOPS MiB/s Average min max 00:43:53.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15058.68 58.82 17009.65 5087.61 22475.14 00:43:53.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14953.08 58.41 17131.63 5198.22 62645.79 00:43:53.441 ======================================================== 00:43:53.441 Total : 30011.76 117.23 17070.42 5087.61 62645.79 00:43:53.441 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 140746 0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 140746 0 idle 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140746 root 20 0 20.1t 219648 100608 S 0.0 0.2 0:20.63 reactor_0' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140746 root 20 0 20.1t 219648 100608 S 0.0 0.2 0:20.63 reactor_0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 140746 1 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 140746 1 idle 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:53.441 00:24:31 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140752 root 20 0 20.1t 219648 100608 S 0.0 0.2 0:10.00 reactor_1' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140752 root 20 0 20.1t 219648 100608 S 0.0 0.2 0:10.00 reactor_1 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:53.441 00:24:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:43:53.442 00:24:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:53.442 00:24:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:53.442 00:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:43:53.442 00:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:53.442 00:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:53.442 00:24:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:43:55.339 00:24:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 140746 0 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 140746 0 idle 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140746 root 20 0 20.1t 274176 119808 S 6.7 0.3 0:21.05 reactor_0' 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140746 root 20 0 20.1t 274176 119808 S 6.7 0.3 0:21.05 reactor_0 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:55.340 00:24:34 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 140746 1 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 140746 1 idle 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=140746 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 140746 -w 256 00:43:55.340 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 140752 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:10.18 reactor_1' 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 140752 root 20 0 20.1t 274176 119808 S 0.0 0.3 0:10.18 reactor_1 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- 
# cpu_rate=0.0 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:55.598 00:24:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:56.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:56.163 
00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:56.163 rmmod nvme_tcp 00:43:56.163 rmmod nvme_fabrics 00:43:56.163 rmmod nvme_keyring 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 140746 ']' 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 140746 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 140746 ']' 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 140746 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140746 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140746' 00:43:56.163 killing process with pid 140746 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 140746 00:43:56.163 00:24:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 140746 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:57.536 
00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:57.536 00:24:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.437 00:24:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:59.437 00:43:59.437 real 0m24.513s 00:43:59.437 user 0m42.093s 00:43:59.437 sys 0m8.176s 00:43:59.437 00:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:59.437 00:24:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:59.437 ************************************ 00:43:59.437 END TEST nvmf_interrupt 00:43:59.437 ************************************ 00:43:59.437 00:43:59.437 real 37m16.629s 00:43:59.437 user 92m14.574s 00:43:59.437 sys 9m42.513s 00:43:59.437 00:24:38 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:59.437 00:24:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.437 ************************************ 00:43:59.437 END TEST nvmf_tcp 00:43:59.437 ************************************ 00:43:59.437 00:24:38 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:43:59.437 00:24:38 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:59.437 00:24:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:59.437 00:24:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:59.437 00:24:38 -- common/autotest_common.sh@10 -- # set +x 00:43:59.696 ************************************ 00:43:59.696 START TEST spdkcli_nvmf_tcp 00:43:59.696 ************************************ 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:43:59.696 * Looking for test storage... 00:43:59.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:43:59.696 00:24:38 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:43:59.696 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:59.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.697 --rc genhtml_branch_coverage=1 00:43:59.697 --rc genhtml_function_coverage=1 00:43:59.697 --rc genhtml_legend=1 00:43:59.697 --rc geninfo_all_blocks=1 00:43:59.697 --rc 
geninfo_unexecuted_blocks=1 00:43:59.697 00:43:59.697 ' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:59.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.697 --rc genhtml_branch_coverage=1 00:43:59.697 --rc genhtml_function_coverage=1 00:43:59.697 --rc genhtml_legend=1 00:43:59.697 --rc geninfo_all_blocks=1 00:43:59.697 --rc geninfo_unexecuted_blocks=1 00:43:59.697 00:43:59.697 ' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:59.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.697 --rc genhtml_branch_coverage=1 00:43:59.697 --rc genhtml_function_coverage=1 00:43:59.697 --rc genhtml_legend=1 00:43:59.697 --rc geninfo_all_blocks=1 00:43:59.697 --rc geninfo_unexecuted_blocks=1 00:43:59.697 00:43:59.697 ' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:59.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:59.697 --rc genhtml_branch_coverage=1 00:43:59.697 --rc genhtml_function_coverage=1 00:43:59.697 --rc genhtml_legend=1 00:43:59.697 --rc geninfo_all_blocks=1 00:43:59.697 --rc geninfo_unexecuted_blocks=1 00:43:59.697 00:43:59.697 ' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:59.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=143863 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 143863 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 143863 ']' 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:59.697 00:24:38 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:59.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:59.697 00:24:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.956 [2024-12-14 00:24:38.849468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:43:59.956 [2024-12-14 00:24:38.849558] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143863 ] 00:43:59.956 [2024-12-14 00:24:38.960087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:59.956 [2024-12-14 00:24:39.061794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:59.956 [2024-12-14 00:24:39.061804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:00.522 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:00.522 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:44:00.522 00:24:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:00.522 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:00.522 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # 
[[ tcp == \r\d\m\a ]] 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:00.780 00:24:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:00.780 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:00.780 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:00.780 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:00.780 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:00.780 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:00.780 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:00.780 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.780 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.780 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:00.780 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:00.780 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:00.780 ' 00:44:03.310 [2024-12-14 00:24:42.312346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:04.684 [2024-12-14 00:24:43.532561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:44:07.214 [2024-12-14 00:24:45.779726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:08.592 [2024-12-14 00:24:47.709927] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:10.493 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:10.493 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:10.493 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:10.493 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:10.493 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:10.493 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:10.493 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:10.493 00:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:10.752 00:24:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:10.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:44:10.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:10.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:10.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:10.752 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:10.752 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:10.752 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:10.752 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:10.753 ' 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:17.322 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:17.322 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:17.322 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:17.322 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 143863 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 143863 ']' 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 143863 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:17.322 00:24:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 143863 00:44:17.322 00:24:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:17.322 00:24:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:17.322 00:24:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 143863' 00:44:17.322 killing process with pid 143863 00:44:17.322 00:24:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 143863 00:44:17.322 00:24:56 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 143863 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 143863 ']' 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 143863 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 143863 ']' 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 143863 00:44:18.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (143863) - No such process 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 143863 is not found' 00:44:18.259 Process with pid 143863 is not found 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:18.259 00:44:18.259 real 0m18.580s 00:44:18.259 user 0m38.856s 00:44:18.259 sys 0m0.871s 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:18.259 00:24:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:18.259 ************************************ 00:44:18.259 END TEST spdkcli_nvmf_tcp 00:44:18.259 ************************************ 00:44:18.259 00:24:57 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:18.259 00:24:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:44:18.259 00:24:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:18.259 00:24:57 -- common/autotest_common.sh@10 -- # set +x 00:44:18.259 ************************************ 00:44:18.259 START TEST nvmf_identify_passthru 00:44:18.259 ************************************ 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:18.259 * Looking for test storage... 00:44:18.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:18.259 00:24:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.259 --rc genhtml_branch_coverage=1 00:44:18.259 --rc genhtml_function_coverage=1 00:44:18.259 --rc genhtml_legend=1 00:44:18.259 
--rc geninfo_all_blocks=1 00:44:18.259 --rc geninfo_unexecuted_blocks=1 00:44:18.259 00:44:18.259 ' 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.259 --rc genhtml_branch_coverage=1 00:44:18.259 --rc genhtml_function_coverage=1 00:44:18.259 --rc genhtml_legend=1 00:44:18.259 --rc geninfo_all_blocks=1 00:44:18.259 --rc geninfo_unexecuted_blocks=1 00:44:18.259 00:44:18.259 ' 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.259 --rc genhtml_branch_coverage=1 00:44:18.259 --rc genhtml_function_coverage=1 00:44:18.259 --rc genhtml_legend=1 00:44:18.259 --rc geninfo_all_blocks=1 00:44:18.259 --rc geninfo_unexecuted_blocks=1 00:44:18.259 00:44:18.259 ' 00:44:18.259 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:18.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.259 --rc genhtml_branch_coverage=1 00:44:18.259 --rc genhtml_function_coverage=1 00:44:18.259 --rc genhtml_legend=1 00:44:18.259 --rc geninfo_all_blocks=1 00:44:18.259 --rc geninfo_unexecuted_blocks=1 00:44:18.259 00:44:18.259 ' 00:44:18.259 00:24:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:18.259 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:18.519 00:24:57 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:18.519 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:18.519 00:24:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.519 00:24:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:18.519 00:24:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.519 00:24:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:18.519 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:18.519 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:18.519 00:24:57 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:18.519 00:24:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:23.793 
00:25:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:23.793 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.793 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:23.794 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:23.794 Found net devices under 0000:af:00.0: cvl_0_0 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:23.794 00:25:02 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:23.794 Found net devices under 0000:af:00.1: cvl_0_1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:23.794 
00:25:02 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:23.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:23.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:44:23.794 00:44:23.794 --- 10.0.0.2 ping statistics --- 00:44:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.794 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:23.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:23.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:44:23.794 00:44:23.794 --- 10.0.0.1 ping statistics --- 00:44:23.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:23.794 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:23.794 00:25:02 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:23.794 
00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:44:23.794 00:25:02 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:23.794 00:25:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:27.988 00:25:06 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:44:27.988 00:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:44:27.988 00:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:27.988 00:25:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=150961 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:32.182 00:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 150961 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 150961 ']' 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:32.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:32.182 00:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:32.182 [2024-12-14 00:25:11.219690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:32.182 [2024-12-14 00:25:11.219786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:32.441 [2024-12-14 00:25:11.339151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:32.441 [2024-12-14 00:25:11.448307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:32.441 [2024-12-14 00:25:11.448354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:32.441 [2024-12-14 00:25:11.448364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:32.441 [2024-12-14 00:25:11.448390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:32.441 [2024-12-14 00:25:11.448399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:44:32.441 [2024-12-14 00:25:11.450657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:32.441 [2024-12-14 00:25:11.450729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:44:32.441 [2024-12-14 00:25:11.450830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:32.441 [2024-12-14 00:25:11.450840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:33.009 00:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.009 INFO: Log level set to 20 00:44:33.009 INFO: Requests: 00:44:33.009 { 00:44:33.009 "jsonrpc": "2.0", 00:44:33.009 "method": "nvmf_set_config", 00:44:33.009 "id": 1, 00:44:33.009 "params": { 00:44:33.009 "admin_cmd_passthru": { 00:44:33.009 "identify_ctrlr": true 00:44:33.009 } 00:44:33.009 } 00:44:33.009 } 00:44:33.009 00:44:33.009 INFO: response: 00:44:33.009 { 00:44:33.009 "jsonrpc": "2.0", 00:44:33.009 "id": 1, 00:44:33.009 "result": true 00:44:33.009 } 00:44:33.009 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.009 00:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.009 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.009 INFO: Setting log level to 20 00:44:33.009 INFO: Setting log level to 20 00:44:33.009 INFO: Log level set to 20 00:44:33.009 INFO: Log level set to 20 00:44:33.009 
INFO: Requests: 00:44:33.009 { 00:44:33.009 "jsonrpc": "2.0", 00:44:33.009 "method": "framework_start_init", 00:44:33.009 "id": 1 00:44:33.009 } 00:44:33.009 00:44:33.009 INFO: Requests: 00:44:33.009 { 00:44:33.009 "jsonrpc": "2.0", 00:44:33.009 "method": "framework_start_init", 00:44:33.009 "id": 1 00:44:33.009 } 00:44:33.009 00:44:33.269 [2024-12-14 00:25:12.366160] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:33.269 INFO: response: 00:44:33.269 { 00:44:33.269 "jsonrpc": "2.0", 00:44:33.269 "id": 1, 00:44:33.269 "result": true 00:44:33.269 } 00:44:33.269 00:44:33.269 INFO: response: 00:44:33.269 { 00:44:33.269 "jsonrpc": "2.0", 00:44:33.269 "id": 1, 00:44:33.269 "result": true 00:44:33.269 } 00:44:33.269 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.269 00:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.269 INFO: Setting log level to 40 00:44:33.269 INFO: Setting log level to 40 00:44:33.269 INFO: Setting log level to 40 00:44:33.269 [2024-12-14 00:25:12.382770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:33.269 00:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:33.269 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:33.527 00:25:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:44:33.527 00:25:12 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:33.527 00:25:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.815 Nvme0n1 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.816 [2024-12-14 00:25:15.345675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.816 00:25:15 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.816 [ 00:44:36.816 { 00:44:36.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:36.816 "subtype": "Discovery", 00:44:36.816 "listen_addresses": [], 00:44:36.816 "allow_any_host": true, 00:44:36.816 "hosts": [] 00:44:36.816 }, 00:44:36.816 { 00:44:36.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:36.816 "subtype": "NVMe", 00:44:36.816 "listen_addresses": [ 00:44:36.816 { 00:44:36.816 "trtype": "TCP", 00:44:36.816 "adrfam": "IPv4", 00:44:36.816 "traddr": "10.0.0.2", 00:44:36.816 "trsvcid": "4420" 00:44:36.816 } 00:44:36.816 ], 00:44:36.816 "allow_any_host": true, 00:44:36.816 "hosts": [], 00:44:36.816 "serial_number": "SPDK00000000000001", 00:44:36.816 "model_number": "SPDK bdev Controller", 00:44:36.816 "max_namespaces": 1, 00:44:36.816 "min_cntlid": 1, 00:44:36.816 "max_cntlid": 65519, 00:44:36.816 "namespaces": [ 00:44:36.816 { 00:44:36.816 "nsid": 1, 00:44:36.816 "bdev_name": "Nvme0n1", 00:44:36.816 "name": "Nvme0n1", 00:44:36.816 "nguid": "1C89CD7625E74EF4B30F025F8B38A880", 00:44:36.816 "uuid": "1c89cd76-25e7-4ef4-b30f-025f8b38a880" 00:44:36.816 } 00:44:36.816 ] 00:44:36.816 } 00:44:36.816 ] 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:36.816 00:25:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:36.816 rmmod nvme_tcp 00:44:36.816 rmmod nvme_fabrics 00:44:36.816 rmmod nvme_keyring 00:44:36.816 00:25:15 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 150961 ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 150961 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 150961 ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 150961 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:36.816 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150961 00:44:37.075 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:37.075 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:37.075 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150961' 00:44:37.075 killing process with pid 150961 00:44:37.075 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 150961 00:44:37.075 00:25:15 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 150961 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@791 
-- # iptables-save 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:39.609 00:25:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:39.609 00:25:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:39.609 00:25:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.516 00:25:20 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:41.516 00:44:41.516 real 0m23.276s 00:44:41.516 user 0m33.438s 00:44:41.516 sys 0m5.867s 00:44:41.516 00:25:20 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:41.516 00:25:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:41.516 ************************************ 00:44:41.516 END TEST nvmf_identify_passthru 00:44:41.516 ************************************ 00:44:41.516 00:25:20 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:41.516 00:25:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:41.516 00:25:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:41.516 00:25:20 -- common/autotest_common.sh@10 -- # set +x 00:44:41.516 ************************************ 00:44:41.516 START TEST nvmf_dif 00:44:41.516 ************************************ 00:44:41.516 00:25:20 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:41.516 * Looking for test storage... 
00:44:41.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:41.516 00:25:20 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:41.516 00:25:20 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:41.516 00:25:20 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:41.775 00:25:20 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:41.775 00:25:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:41.775 00:25:20 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:41.775 00:25:20 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:41.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.775 --rc genhtml_branch_coverage=1 00:44:41.776 --rc genhtml_function_coverage=1 00:44:41.776 --rc genhtml_legend=1 00:44:41.776 --rc geninfo_all_blocks=1 00:44:41.776 --rc geninfo_unexecuted_blocks=1 00:44:41.776 00:44:41.776 ' 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.776 --rc genhtml_branch_coverage=1 00:44:41.776 --rc genhtml_function_coverage=1 00:44:41.776 --rc genhtml_legend=1 00:44:41.776 --rc geninfo_all_blocks=1 00:44:41.776 --rc geninfo_unexecuted_blocks=1 00:44:41.776 00:44:41.776 ' 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:44:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.776 --rc genhtml_branch_coverage=1 00:44:41.776 --rc genhtml_function_coverage=1 00:44:41.776 --rc genhtml_legend=1 00:44:41.776 --rc geninfo_all_blocks=1 00:44:41.776 --rc geninfo_unexecuted_blocks=1 00:44:41.776 00:44:41.776 ' 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:41.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:41.776 --rc genhtml_branch_coverage=1 00:44:41.776 --rc genhtml_function_coverage=1 00:44:41.776 --rc genhtml_legend=1 00:44:41.776 --rc geninfo_all_blocks=1 00:44:41.776 --rc geninfo_unexecuted_blocks=1 00:44:41.776 00:44:41.776 ' 00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:41.776 00:25:20 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:41.776 00:25:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:41.776 00:25:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.776 00:25:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.776 00:25:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.776 00:25:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.776 00:25:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.776 00:25:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.776 00:25:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:41.776 00:25:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:41.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:41.776 00:25:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:41.776 00:25:20 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:44:41.776 00:25:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:47.051 00:25:25 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:44:47.051 Found 0000:af:00.0 (0x8086 - 0x159b) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:44:47.051 Found 0000:af:00.1 (0x8086 - 0x159b) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:47.051 00:25:25 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:44:47.051 Found net devices under 0000:af:00.0: cvl_0_0 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:44:47.051 Found net devices under 0000:af:00.1: cvl_0_1 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:47.051 
00:25:25 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:47.051 00:25:25 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:47.051 00:25:26 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:47.051 00:25:26 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:47.051 00:25:26 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:47.051 00:25:26 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:47.051 00:25:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:47.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:44:47.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:44:47.310 00:44:47.310 --- 10.0.0.2 ping statistics --- 00:44:47.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:47.310 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:47.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:47.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:44:47.310 00:44:47.310 --- 10.0.0.1 ping statistics --- 00:44:47.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:47.310 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:47.310 00:25:26 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:49.900 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:44:49.900 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:44:49.900 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:44:49.900 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:49.900 00:25:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:49.900 00:25:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=156750 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:49.900 00:25:28 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 156750 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 156750 ']' 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:44:49.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:49.900 00:25:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.282 [2024-12-14 00:25:29.040144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:44:50.282 [2024-12-14 00:25:29.040234] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:50.282 [2024-12-14 00:25:29.156848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.282 [2024-12-14 00:25:29.261095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:50.282 [2024-12-14 00:25:29.261141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:50.282 [2024-12-14 00:25:29.261151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:50.282 [2024-12-14 00:25:29.261161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:50.282 [2024-12-14 00:25:29.261168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
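Every `rpc_cmd` invocation in this log (framework_start_init, nvmf_create_transport, nvmf_create_subsystem, and so on) is a JSON-RPC 2.0 exchange with the target over /var/tmp/spdk.sock, as the INFO request/response dumps near the top of the log show. The following is a minimal illustrative sketch of building such a request body; `make_rpc_request` is a hypothetical helper for illustration only, not part of SPDK's rpc.py client.

```python
import json

def make_rpc_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body of the shape rpc_cmd sends
    to the SPDK target (e.g. the framework_start_init request above).
    Illustrative helper only; not SPDK's actual client code."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# The framework_start_init request as logged above carries no params:
print(make_rpc_request("framework_start_init"))
```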
00:44:50.282 [2024-12-14 00:25:29.262372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:50.852 00:25:29 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 00:25:29 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:50.852 00:25:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:50.852 00:25:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 [2024-12-14 00:25:29.889275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.852 00:25:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 ************************************ 00:44:50.852 START TEST fio_dif_1_default 00:44:50.852 ************************************ 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 bdev_null0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:50.852 [2024-12-14 00:25:29.965639] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:50.852 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:50.852 { 00:44:50.852 "params": { 00:44:50.852 "name": "Nvme$subsystem", 00:44:50.852 "trtype": "$TEST_TRANSPORT", 00:44:50.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:50.852 "adrfam": "ipv4", 00:44:50.852 "trsvcid": "$NVMF_PORT", 00:44:50.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:50.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:50.853 "hdgst": ${hdgst:-false}, 00:44:50.853 "ddgst": ${ddgst:-false} 00:44:50.853 }, 00:44:50.853 "method": "bdev_nvme_attach_controller" 00:44:50.853 } 00:44:50.853 EOF 00:44:50.853 )") 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:50.853 00:25:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:50.853 "params": { 00:44:50.853 "name": "Nvme0", 00:44:50.853 "trtype": "tcp", 00:44:50.853 "traddr": "10.0.0.2", 00:44:50.853 "adrfam": "ipv4", 00:44:50.853 "trsvcid": "4420", 00:44:50.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:50.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:50.853 "hdgst": false, 00:44:50.853 "ddgst": false 00:44:50.853 }, 00:44:50.853 "method": "bdev_nvme_attach_controller" 00:44:50.853 }' 00:44:51.112 00:25:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:44:51.112 00:25:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:44:51.112 00:25:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:44:51.112 00:25:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:51.112 00:25:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:51.370 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:51.370 fio-3.35 00:44:51.370 Starting 1 thread 00:45:03.583 00:45:03.583 filename0: (groupid=0, jobs=1): err= 0: pid=157136: Sat Dec 14 00:25:41 2024 00:45:03.583 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10036msec) 00:45:03.583 slat (nsec): min=7011, max=45517, avg=9026.83, stdev=2918.01 00:45:03.583 clat (usec): min=40765, max=45951, avg=41439.82, stdev=570.98 00:45:03.583 lat (usec): min=40773, max=45996, avg=41448.84, stdev=571.36 00:45:03.583 clat percentiles (usec): 00:45:03.583 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:45:03.583 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:45:03.583 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:45:03.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:45:03.583 | 99.99th=[45876] 00:45:03.583 bw ( KiB/s): min= 352, max= 416, per=99.79%, avg=385.60, stdev=12.61, samples=20 00:45:03.583 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:45:03.583 lat (msec) : 50=100.00% 00:45:03.583 cpu : usr=93.76%, sys=5.92%, ctx=15, majf=0, minf=1634 00:45:03.583 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:03.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:03.583 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:03.583 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:03.583 00:45:03.583 Run status group 0 (all jobs): 00:45:03.583 READ: bw=386KiB/s (395kB/s), 386KiB/s-386KiB/s (395kB/s-395kB/s), io=3872KiB (3965kB), run=10036-10036msec 00:45:03.583 ----------------------------------------------------- 00:45:03.583 Suppressions used: 00:45:03.583 count bytes template 00:45:03.583 1 8 /usr/src/fio/parse.c 00:45:03.583 1 8 libtcmalloc_minimal.so 00:45:03.583 1 904 libcrypto.so 00:45:03.583 ----------------------------------------------------- 00:45:03.583 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:45:03.583 real 0m12.582s 00:45:03.583 user 0m17.075s 00:45:03.583 sys 0m1.089s 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 ************************************ 00:45:03.583 END TEST fio_dif_1_default 00:45:03.583 ************************************ 00:45:03.583 00:25:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:45:03.583 00:25:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:03.583 00:25:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 ************************************ 00:45:03.583 START TEST fio_dif_1_multi_subsystems 00:45:03.583 ************************************ 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- 
# create_subsystems 0 1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 bdev_null0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 [2024-12-14 00:25:42.619313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.583 bdev_null1 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.583 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:03.584 { 00:45:03.584 "params": { 00:45:03.584 "name": "Nvme$subsystem", 00:45:03.584 "trtype": "$TEST_TRANSPORT", 00:45:03.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.584 "adrfam": "ipv4", 00:45:03.584 "trsvcid": "$NVMF_PORT", 00:45:03.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.584 "hdgst": ${hdgst:-false}, 00:45:03.584 "ddgst": ${ddgst:-false} 00:45:03.584 }, 00:45:03.584 "method": "bdev_nvme_attach_controller" 00:45:03.584 } 00:45:03.584 EOF 00:45:03.584 )") 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:03.584 { 00:45:03.584 "params": { 00:45:03.584 "name": "Nvme$subsystem", 00:45:03.584 "trtype": "$TEST_TRANSPORT", 00:45:03.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:03.584 "adrfam": "ipv4", 00:45:03.584 "trsvcid": "$NVMF_PORT", 00:45:03.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:03.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:03.584 "hdgst": ${hdgst:-false}, 00:45:03.584 "ddgst": ${ddgst:-false} 00:45:03.584 }, 00:45:03.584 "method": "bdev_nvme_attach_controller" 00:45:03.584 } 00:45:03.584 EOF 00:45:03.584 )") 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:03.584 "params": { 00:45:03.584 "name": "Nvme0", 00:45:03.584 "trtype": "tcp", 00:45:03.584 "traddr": "10.0.0.2", 00:45:03.584 "adrfam": "ipv4", 00:45:03.584 "trsvcid": "4420", 00:45:03.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:03.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:03.584 "hdgst": false, 00:45:03.584 "ddgst": false 00:45:03.584 }, 00:45:03.584 "method": "bdev_nvme_attach_controller" 00:45:03.584 },{ 00:45:03.584 "params": { 00:45:03.584 "name": "Nvme1", 00:45:03.584 "trtype": "tcp", 00:45:03.584 "traddr": "10.0.0.2", 00:45:03.584 "adrfam": "ipv4", 00:45:03.584 "trsvcid": "4420", 00:45:03.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:03.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:03.584 "hdgst": false, 00:45:03.584 "ddgst": false 00:45:03.584 }, 00:45:03.584 "method": "bdev_nvme_attach_controller" 00:45:03.584 }' 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:03.584 00:25:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:04.173 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:04.173 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:45:04.173 fio-3.35 00:45:04.173 Starting 2 threads 00:45:16.363 00:45:16.363 filename0: (groupid=0, jobs=1): err= 0: pid=159267: Sat Dec 14 00:25:54 2024 00:45:16.363 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10017msec) 00:45:16.363 slat (nsec): min=6940, max=36006, avg=9749.40, stdev=3665.10 00:45:16.363 clat (usec): min=473, max=46087, avg=21057.87, stdev=20243.15 00:45:16.363 lat (usec): min=480, max=46120, avg=21067.62, stdev=20242.44 00:45:16.363 clat percentiles (usec): 00:45:16.363 | 1.00th=[ 490], 5.00th=[ 586], 10.00th=[ 668], 20.00th=[ 717], 00:45:16.363 | 30.00th=[ 734], 40.00th=[ 840], 50.00th=[40633], 60.00th=[41157], 00:45:16.363 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:16.363 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:45:16.363 | 99.99th=[45876] 00:45:16.363 bw ( KiB/s): min= 704, max= 768, per=50.01%, avg=758.40, stdev=23.45, samples=20 00:45:16.363 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:45:16.363 lat (usec) : 500=2.32%, 750=31.68%, 1000=6.21% 00:45:16.363 lat (msec) : 2=9.68%, 50=50.11% 00:45:16.363 cpu : usr=96.64%, sys=3.07%, ctx=12, majf=0, minf=1634 00:45:16.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.363 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:16.363 filename1: (groupid=0, jobs=1): err= 0: pid=159268: Sat Dec 14 00:25:54 2024 00:45:16.363 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10017msec) 00:45:16.363 slat (nsec): min=6846, max=30905, avg=9682.73, stdev=3592.84 00:45:16.363 clat (usec): min=474, max=47169, avg=21102.16, stdev=20310.48 00:45:16.363 lat (usec): min=481, 
max=47198, avg=21111.84, stdev=20309.78 00:45:16.363 clat percentiles (usec): 00:45:16.363 | 1.00th=[ 490], 5.00th=[ 502], 10.00th=[ 594], 20.00th=[ 709], 00:45:16.363 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[40633], 60.00th=[41157], 00:45:16.363 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:45:16.363 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:45:16.363 | 99.99th=[46924] 00:45:16.363 bw ( KiB/s): min= 672, max= 768, per=49.87%, avg=756.80, stdev=28.00, samples=20 00:45:16.363 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:45:16.363 lat (usec) : 500=3.74%, 750=31.17%, 1000=11.50% 00:45:16.363 lat (msec) : 2=3.38%, 50=50.21% 00:45:16.363 cpu : usr=96.73%, sys=2.99%, ctx=13, majf=0, minf=1632 00:45:16.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:16.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:16.363 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:16.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:16.363 00:45:16.363 Run status group 0 (all jobs): 00:45:16.363 READ: bw=1516KiB/s (1552kB/s), 757KiB/s-759KiB/s (775kB/s-777kB/s), io=14.8MiB (15.5MB), run=10017-10017msec 00:45:16.363 ----------------------------------------------------- 00:45:16.363 Suppressions used: 00:45:16.363 count bytes template 00:45:16.363 2 16 /usr/src/fio/parse.c 00:45:16.363 1 8 libtcmalloc_minimal.so 00:45:16.363 1 904 libcrypto.so 00:45:16.363 ----------------------------------------------------- 00:45:16.363 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:16.363 
00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:16.363 00:25:55 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.363 00:45:16.363 real 0m12.853s 00:45:16.363 user 0m27.445s 00:45:16.363 sys 0m1.168s 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:16.363 ************************************ 00:45:16.363 END TEST fio_dif_1_multi_subsystems 00:45:16.363 ************************************ 00:45:16.363 00:25:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:16.363 00:25:55 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:16.363 00:25:55 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:16.363 00:25:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:16.621 ************************************ 00:45:16.621 START TEST fio_dif_rand_params 00:45:16.621 ************************************ 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # 
runtime=5 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.621 bdev_null0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:16.621 [2024-12-14 00:25:55.544386] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:16.621 { 00:45:16.621 "params": { 00:45:16.621 "name": "Nvme$subsystem", 00:45:16.621 "trtype": "$TEST_TRANSPORT", 00:45:16.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:16.621 "adrfam": "ipv4", 00:45:16.621 "trsvcid": "$NVMF_PORT", 00:45:16.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:16.621 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:45:16.621 "hdgst": ${hdgst:-false}, 00:45:16.621 "ddgst": ${ddgst:-false} 00:45:16.621 }, 00:45:16.621 "method": "bdev_nvme_attach_controller" 00:45:16.621 } 00:45:16.621 EOF 00:45:16.621 )") 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:16.621 "params": { 00:45:16.621 "name": "Nvme0", 00:45:16.621 "trtype": "tcp", 00:45:16.621 "traddr": "10.0.0.2", 00:45:16.621 "adrfam": "ipv4", 00:45:16.621 "trsvcid": "4420", 00:45:16.621 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:16.621 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:16.621 "hdgst": false, 00:45:16.621 "ddgst": false 00:45:16.621 }, 00:45:16.621 "method": "bdev_nvme_attach_controller" 00:45:16.621 }' 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:16.621 00:25:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:16.879 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:16.879 ... 
00:45:16.879 fio-3.35 00:45:16.879 Starting 3 threads 00:45:23.431 00:45:23.431 filename0: (groupid=0, jobs=1): err= 0: pid=161393: Sat Dec 14 00:26:01 2024 00:45:23.431 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5006msec) 00:45:23.431 slat (nsec): min=7342, max=60931, avg=16110.53, stdev=4460.23 00:45:23.431 clat (usec): min=4304, max=51931, avg=10743.54, stdev=2795.60 00:45:23.431 lat (usec): min=4319, max=51973, avg=10759.65, stdev=2795.68 00:45:23.431 clat percentiles (usec): 00:45:23.431 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9634], 00:45:23.431 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:45:23.431 | 70.00th=[11076], 80.00th=[11469], 90.00th=[12125], 95.00th=[12649], 00:45:23.431 | 99.00th=[13960], 99.50th=[14484], 99.90th=[50594], 99.95th=[52167], 00:45:23.431 | 99.99th=[52167] 00:45:23.431 bw ( KiB/s): min=32191, max=37376, per=36.29%, avg=35654.30, stdev=1519.11, samples=10 00:45:23.431 iops : min= 251, max= 292, avg=278.50, stdev=11.99, samples=10 00:45:23.431 lat (msec) : 10=29.18%, 20=70.39%, 50=0.22%, 100=0.22% 00:45:23.431 cpu : usr=94.43%, sys=5.17%, ctx=13, majf=0, minf=1634 00:45:23.431 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:23.431 filename0: (groupid=0, jobs=1): err= 0: pid=161394: Sat Dec 14 00:26:01 2024 00:45:23.431 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(153MiB/5045msec) 00:45:23.431 slat (nsec): min=7260, max=40338, avg=16450.21, stdev=4434.82 00:45:23.431 clat (usec): min=6951, max=49626, avg=12340.08, stdev=3686.70 00:45:23.431 lat (usec): min=6965, max=49641, avg=12356.53, stdev=3686.84 00:45:23.431 clat percentiles (usec): 00:45:23.431 
| 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:45:23.431 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:45:23.431 | 70.00th=[12780], 80.00th=[13304], 90.00th=[13960], 95.00th=[14615], 00:45:23.431 | 99.00th=[19006], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:45:23.431 | 99.99th=[49546] 00:45:23.431 bw ( KiB/s): min=26880, max=33024, per=31.77%, avg=31206.40, stdev=1721.32, samples=10 00:45:23.431 iops : min= 210, max= 258, avg=243.80, stdev=13.45, samples=10 00:45:23.431 lat (msec) : 10=7.53%, 20=91.56%, 50=0.90% 00:45:23.431 cpu : usr=95.32%, sys=4.30%, ctx=10, majf=0, minf=1634 00:45:23.431 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:23.431 filename0: (groupid=0, jobs=1): err= 0: pid=161395: Sat Dec 14 00:26:01 2024 00:45:23.431 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(157MiB/5003msec) 00:45:23.431 slat (nsec): min=7380, max=77272, avg=19561.64, stdev=7568.86 00:45:23.431 clat (usec): min=3955, max=51591, avg=11926.51, stdev=3477.62 00:45:23.431 lat (usec): min=3966, max=51620, avg=11946.07, stdev=3477.45 00:45:23.431 clat percentiles (usec): 00:45:23.431 | 1.00th=[ 8160], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10421], 00:45:23.431 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[12125], 00:45:23.431 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13566], 95.00th=[14091], 00:45:23.431 | 99.00th=[15401], 99.50th=[46400], 99.90th=[51119], 99.95th=[51643], 00:45:23.431 | 99.99th=[51643] 00:45:23.431 bw ( KiB/s): min=26624, max=34560, per=32.69%, avg=32113.78, stdev=2336.17, samples=9 00:45:23.431 iops : min= 208, max= 270, avg=250.89, stdev=18.25, 
samples=9 00:45:23.431 lat (msec) : 4=0.08%, 10=11.07%, 20=88.14%, 50=0.24%, 100=0.48% 00:45:23.431 cpu : usr=95.34%, sys=4.30%, ctx=10, majf=0, minf=1635 00:45:23.431 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:23.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:23.431 issued rwts: total=1256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:23.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:23.431 00:45:23.431 Run status group 0 (all jobs): 00:45:23.431 READ: bw=95.9MiB/s (101MB/s), 30.3MiB/s-34.8MiB/s (31.7MB/s-36.5MB/s), io=484MiB (508MB), run=5003-5045msec 00:45:23.996 ----------------------------------------------------- 00:45:23.996 Suppressions used: 00:45:23.996 count bytes template 00:45:23.996 5 44 /usr/src/fio/parse.c 00:45:23.996 1 8 libtcmalloc_minimal.so 00:45:23.996 1 904 libcrypto.so 00:45:23.996 ----------------------------------------------------- 00:45:23.996 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:23.996 00:26:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:23.996 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 bdev_null0 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 
53313233-0 --allow-any-host 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 [2024-12-14 00:26:03.163835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:45:24.254 bdev_null1 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.254 bdev_null2 00:45:24.254 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:24.255 00:26:03 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:24.255 { 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme$subsystem", 00:45:24.255 "trtype": "$TEST_TRANSPORT", 00:45:24.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "$NVMF_PORT", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:24.255 "hdgst": ${hdgst:-false}, 00:45:24.255 "ddgst": ${ddgst:-false} 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 } 00:45:24.255 EOF 00:45:24.255 )") 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:24.255 { 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme$subsystem", 00:45:24.255 "trtype": "$TEST_TRANSPORT", 00:45:24.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "$NVMF_PORT", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:24.255 "hdgst": ${hdgst:-false}, 00:45:24.255 "ddgst": ${ddgst:-false} 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 } 00:45:24.255 EOF 00:45:24.255 )") 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:24.255 { 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme$subsystem", 00:45:24.255 "trtype": "$TEST_TRANSPORT", 00:45:24.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "$NVMF_PORT", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:24.255 "hdgst": ${hdgst:-false}, 00:45:24.255 "ddgst": ${ddgst:-false} 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 } 00:45:24.255 EOF 00:45:24.255 )") 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme0", 00:45:24.255 "trtype": "tcp", 00:45:24.255 "traddr": "10.0.0.2", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "4420", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:24.255 "hdgst": false, 00:45:24.255 "ddgst": false 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 },{ 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme1", 00:45:24.255 "trtype": "tcp", 00:45:24.255 "traddr": "10.0.0.2", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "4420", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:24.255 "hdgst": false, 00:45:24.255 "ddgst": false 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 },{ 00:45:24.255 "params": { 00:45:24.255 "name": "Nvme2", 00:45:24.255 "trtype": "tcp", 00:45:24.255 "traddr": "10.0.0.2", 00:45:24.255 "adrfam": "ipv4", 00:45:24.255 "trsvcid": "4420", 00:45:24.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:24.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:24.255 "hdgst": false, 00:45:24.255 "ddgst": false 00:45:24.255 }, 00:45:24.255 "method": "bdev_nvme_attach_controller" 00:45:24.255 }' 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:24.255 00:26:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:24.512 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:24.512 ... 00:45:24.512 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:24.512 ... 00:45:24.512 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:24.512 ... 00:45:24.512 fio-3.35 00:45:24.512 Starting 24 threads 00:45:36.706 00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162678: Sat Dec 14 00:26:14 2024 00:45:36.706 read: IOPS=518, BW=2076KiB/s (2125kB/s)(20.3MiB/10010msec) 00:45:36.706 slat (usec): min=7, max=118, avg=35.50, stdev=23.47 00:45:36.706 clat (usec): min=11458, max=90919, avg=30507.90, stdev=4860.71 00:45:36.706 lat (usec): min=11479, max=90948, avg=30543.40, stdev=4865.01 00:45:36.706 clat percentiles (usec): 00:45:36.706 | 1.00th=[19530], 5.00th=[24511], 10.00th=[27132], 20.00th=[28443], 00:45:36.706 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30016], 60.00th=[31327], 00:45:36.706 | 70.00th=[31851], 80.00th=[32375], 90.00th=[34866], 95.00th=[35914], 00:45:36.706 | 99.00th=[41681], 99.50th=[44303], 99.90th=[90702], 99.95th=[90702], 00:45:36.706 | 99.99th=[90702] 00:45:36.706 bw ( KiB/s): min= 1792, max= 2304, per=4.23%, avg=2071.50, stdev=140.73, samples=20 00:45:36.706 iops : min= 448, max= 576, avg=517.80, stdev=35.27, samples=20 00:45:36.706 lat (msec) : 20=1.23%, 50=98.46%, 100=0.31% 00:45:36.706 cpu : usr=97.87%, sys=1.31%, ctx=100, majf=0, minf=1633 00:45:36.706 IO depths : 1=5.0%, 2=10.0%, 4=20.7%, 8=56.2%, 16=8.1%, 32=0.0%, >=64=0.0% 00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:45:36.706 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5194,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162679: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=511, BW=2045KiB/s (2094kB/s)(20.0MiB/10017msec)
00:45:36.706 slat (usec): min=4, max=138, avg=40.98, stdev=18.47
00:45:36.706 clat (usec): min=13851, max=51550, avg=30980.64, stdev=2722.49
00:45:36.706 lat (usec): min=13861, max=51566, avg=31021.62, stdev=2723.99
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.706 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.706 | 99.00th=[36439], 99.50th=[39060], 99.90th=[51643], 99.95th=[51643],
00:45:36.706 | 99.99th=[51643]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2041.60, stdev=127.83, samples=20
00:45:36.706 iops : min= 448, max= 544, avg=510.40, stdev=31.96, samples=20
00:45:36.706 lat (msec) : 20=0.31%, 50=99.53%, 100=0.16%
00:45:36.706 cpu : usr=98.61%, sys=0.98%, ctx=14, majf=0, minf=1635
00:45:36.706 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162680: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10012msec)
00:45:36.706 slat (usec): min=4, max=102, avg=44.01, stdev=17.84
00:45:36.706 clat (usec): min=25372, max=89101, avg=31081.71, stdev=4010.58
00:45:36.706 lat (usec): min=25416, max=89116, avg=31125.72, stdev=4010.13
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30278], 60.00th=[31589],
00:45:36.706 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.706 | 99.00th=[36439], 99.50th=[38536], 99.90th=[88605], 99.95th=[88605],
00:45:36.706 | 99.99th=[88605]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.53, stdev=112.03, samples=19
00:45:36.706 iops : min= 448, max= 544, avg=508.63, stdev=28.01, samples=19
00:45:36.706 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.706 cpu : usr=98.65%, sys=0.93%, ctx=19, majf=0, minf=1632
00:45:36.706 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162681: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=515, BW=2063KiB/s (2112kB/s)(20.2MiB/10022msec)
00:45:36.706 slat (usec): min=7, max=118, avg=46.05, stdev=21.19
00:45:36.706 clat (usec): min=4150, max=45199, avg=30604.87, stdev=3508.12
00:45:36.706 lat (usec): min=4159, max=45223, avg=30650.92, stdev=3513.12
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[14615], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30278], 60.00th=[31327],
00:45:36.706 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.706 | 99.00th=[36439], 99.50th=[36439], 99.90th=[45351], 99.95th=[45351],
00:45:36.706 | 99.99th=[45351]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2432, per=4.21%, avg=2060.80, stdev=154.83, samples=20
00:45:36.706 iops : min= 448, max= 608, avg=515.20, stdev=38.71, samples=20
00:45:36.706 lat (msec) : 10=0.93%, 20=0.62%, 50=98.45%
00:45:36.706 cpu : usr=98.74%, sys=0.83%, ctx=49, majf=0, minf=1635
00:45:36.706 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162682: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=509, BW=2039KiB/s (2088kB/s)(19.9MiB/10010msec)
00:45:36.706 slat (usec): min=4, max=116, avg=37.27, stdev=21.89
00:45:36.706 clat (usec): min=15366, max=93929, avg=31032.18, stdev=4791.78
00:45:36.706 lat (usec): min=15374, max=93946, avg=31069.46, stdev=4794.41
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[20317], 5.00th=[27657], 10.00th=[28181], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30278], 60.00th=[31589],
00:45:36.706 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.706 | 99.00th=[43779], 99.50th=[48497], 99.90th=[93848], 99.95th=[93848],
00:45:36.706 | 99.99th=[93848]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.20, stdev=136.66, samples=20
00:45:36.706 iops : min= 448, max= 544, avg=508.55, stdev=34.17, samples=20
00:45:36.706 lat (msec) : 20=0.59%, 50=99.10%, 100=0.31%
00:45:36.706 cpu : usr=98.49%, sys=1.00%, ctx=49, majf=0, minf=1632
00:45:36.706 IO depths : 1=5.6%, 2=11.5%, 4=23.8%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0%
00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5102,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162684: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10011msec)
00:45:36.706 slat (usec): min=4, max=125, avg=39.02, stdev=23.93
00:45:36.706 clat (usec): min=11509, max=92648, avg=31064.05, stdev=4349.19
00:45:36.706 lat (usec): min=11522, max=92663, avg=31103.08, stdev=4351.60
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[26346], 5.00th=[27919], 10.00th=[28443], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.706 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.706 | 99.00th=[36439], 99.50th=[45351], 99.90th=[92799], 99.95th=[92799],
00:45:36.706 | 99.99th=[92799]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2176, per=4.15%, avg=2031.35, stdev=124.61, samples=20
00:45:36.706 iops : min= 448, max= 544, avg=507.80, stdev=31.19, samples=20
00:45:36.706 lat (msec) : 20=0.31%, 50=99.37%, 100=0.31%
00:45:36.706 cpu : usr=98.13%, sys=1.09%, ctx=135, majf=0, minf=1635
00:45:36.706 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0%
00:45:36.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.706 issued rwts: total=5094,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.706 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.706 filename0: (groupid=0, jobs=1): err= 0: pid=162685: Sat Dec 14 00:26:14 2024
00:45:36.706 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10023msec)
00:45:36.706 slat (nsec): min=4552, max=98266, avg=43212.84, stdev=17922.16
00:45:36.706 clat (usec): min=22481, max=71865, avg=31046.29, stdev=3281.78
00:45:36.706 lat (usec): min=22496, max=71884, avg=31089.50, stdev=3281.91
00:45:36.706 clat percentiles (usec):
00:45:36.706 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.706 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.706 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.706 | 99.00th=[36439], 99.50th=[38536], 99.90th=[71828], 99.95th=[71828],
00:45:36.706 | 99.99th=[71828]
00:45:36.706 bw ( KiB/s): min= 1792, max= 2304, per=4.16%, avg=2035.35, stdev=149.04, samples=20
00:45:36.706 iops : min= 448, max= 576, avg=508.80, stdev=37.29, samples=20
00:45:36.706 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.706 cpu : usr=97.79%, sys=1.47%, ctx=118, majf=0, minf=1633
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename0: (groupid=0, jobs=1): err= 0: pid=162686: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=508, BW=2034KiB/s (2082kB/s)(19.9MiB/10008msec)
00:45:36.707 slat (nsec): min=7445, max=94238, avg=43360.15, stdev=18599.40
00:45:36.707 clat (usec): min=25551, max=86753, avg=31126.26, stdev=3892.90
00:45:36.707 lat (usec): min=25605, max=86780, avg=31169.62, stdev=3891.79
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26870], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.707 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[36439], 99.90th=[86508], 99.95th=[86508],
00:45:36.707 | 99.99th=[86508]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.53, stdev=112.03, samples=19
00:45:36.707 iops : min= 448, max= 544, avg=508.63, stdev=28.01, samples=19
00:45:36.707 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.707 cpu : usr=98.54%, sys=1.04%, ctx=19, majf=0, minf=1635
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162687: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=517, BW=2069KiB/s (2119kB/s)(20.3MiB/10033msec)
00:45:36.707 slat (usec): min=7, max=122, avg=46.03, stdev=24.15
00:45:36.707 clat (usec): min=4124, max=47685, avg=30515.99, stdev=3902.40
00:45:36.707 lat (usec): min=4133, max=47705, avg=30562.02, stdev=3909.43
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[ 9372], 5.00th=[28181], 10.00th=[28181], 20.00th=[28967],
00:45:36.707 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30278], 60.00th=[31327],
00:45:36.707 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.707 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449],
00:45:36.707 | 99.99th=[47449]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2608, per=4.23%, avg=2069.35, stdev=193.69, samples=20
00:45:36.707 iops : min= 448, max= 652, avg=517.30, stdev=48.40, samples=20
00:45:36.707 lat (msec) : 10=1.66%, 20=0.35%, 50=98.00%
00:45:36.707 cpu : usr=98.71%, sys=0.78%, ctx=66, majf=0, minf=1635
00:45:36.707 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5190,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162688: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10010msec)
00:45:36.707 slat (nsec): min=6503, max=92305, avg=38308.18, stdev=19618.90
00:45:36.707 clat (usec): min=26003, max=88763, avg=31193.56, stdev=3979.92
00:45:36.707 lat (usec): min=26032, max=88796, avg=31231.87, stdev=3978.84
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26870], 5.00th=[27919], 10.00th=[28443], 20.00th=[28967],
00:45:36.707 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[36439], 99.90th=[88605], 99.95th=[88605],
00:45:36.707 | 99.99th=[88605]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2180, per=4.16%, avg=2035.32, stdev=127.40, samples=19
00:45:36.707 iops : min= 448, max= 545, avg=508.79, stdev=31.89, samples=19
00:45:36.707 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.707 cpu : usr=98.12%, sys=1.17%, ctx=164, majf=0, minf=1635
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162689: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10023msec)
00:45:36.707 slat (usec): min=7, max=122, avg=46.03, stdev=23.73
00:45:36.707 clat (usec): min=25124, max=71411, avg=30966.77, stdev=3233.23
00:45:36.707 lat (usec): min=25147, max=71436, avg=31012.80, stdev=3234.92
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.707 | 30.00th=[29230], 40.00th=[29492], 50.00th=[30278], 60.00th=[31327],
00:45:36.707 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.707 | 99.00th=[36439], 99.50th=[38536], 99.90th=[71828], 99.95th=[71828],
00:45:36.707 | 99.99th=[71828]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2304, per=4.16%, avg=2035.50, stdev=148.92, samples=20
00:45:36.707 iops : min= 448, max= 576, avg=508.80, stdev=37.29, samples=20
00:45:36.707 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.707 cpu : usr=98.53%, sys=0.98%, ctx=41, majf=0, minf=1635
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162691: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10004msec)
00:45:36.707 slat (nsec): min=5976, max=93362, avg=44825.09, stdev=18057.78
00:45:36.707 clat (usec): min=25575, max=83045, avg=31086.66, stdev=3734.88
00:45:36.707 lat (usec): min=25638, max=83066, avg=31131.49, stdev=3733.85
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.707 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[36439], 99.90th=[83362], 99.95th=[83362],
00:45:36.707 | 99.99th=[83362]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.53, stdev=127.25, samples=19
00:45:36.707 iops : min= 448, max= 544, avg=508.63, stdev=31.81, samples=19
00:45:36.707 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.707 cpu : usr=98.09%, sys=1.21%, ctx=52, majf=0, minf=1634
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162692: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=511, BW=2045KiB/s (2094kB/s)(20.0MiB/10016msec)
00:45:36.707 slat (nsec): min=4688, max=99501, avg=39224.24, stdev=18464.88
00:45:36.707 clat (usec): min=17780, max=47513, avg=30995.21, stdev=2619.01
00:45:36.707 lat (usec): min=17794, max=47547, avg=31034.43, stdev=2620.53
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26608], 5.00th=[27919], 10.00th=[28443], 20.00th=[28967],
00:45:36.707 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449],
00:45:36.707 | 99.99th=[47449]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2180, per=4.17%, avg=2041.80, stdev=128.06, samples=20
00:45:36.707 iops : min= 448, max= 545, avg=510.45, stdev=32.01, samples=20
00:45:36.707 lat (msec) : 20=0.31%, 50=99.69%
00:45:36.707 cpu : usr=98.24%, sys=1.28%, ctx=95, majf=0, minf=1636
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162693: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=510, BW=2043KiB/s (2092kB/s)(20.0MiB/10023msec)
00:45:36.707 slat (usec): min=4, max=104, avg=37.80, stdev=18.70
00:45:36.707 clat (usec): min=17750, max=47572, avg=31033.74, stdev=2589.15
00:45:36.707 lat (usec): min=17767, max=47593, avg=31071.54, stdev=2590.70
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28443], 20.00th=[28967],
00:45:36.707 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449],
00:45:36.707 | 99.99th=[47449]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2304, per=4.17%, avg=2041.60, stdev=127.83, samples=20
00:45:36.707 iops : min= 448, max= 576, avg=510.40, stdev=31.96, samples=20
00:45:36.707 lat (msec) : 20=0.31%, 50=99.69%
00:45:36.707 cpu : usr=97.83%, sys=1.43%, ctx=149, majf=0, minf=1634
00:45:36.707 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.707 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.707 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.707 filename1: (groupid=0, jobs=1): err= 0: pid=162694: Sat Dec 14 00:26:14 2024
00:45:36.707 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10023msec)
00:45:36.707 slat (usec): min=4, max=100, avg=26.03, stdev=20.64
00:45:36.707 clat (usec): min=17031, max=74625, avg=31180.30, stdev=3351.21
00:45:36.707 lat (usec): min=17042, max=74639, avg=31206.34, stdev=3355.08
00:45:36.707 clat percentiles (usec):
00:45:36.707 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28443], 20.00th=[29230],
00:45:36.707 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.707 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914],
00:45:36.707 | 99.00th=[36439], 99.50th=[36439], 99.90th=[74974], 99.95th=[74974],
00:45:36.707 | 99.99th=[74974]
00:45:36.707 bw ( KiB/s): min= 1792, max= 2304, per=4.16%, avg=2035.00, stdev=143.43, samples=20
00:45:36.708 iops : min= 448, max= 576, avg=508.75, stdev=35.86, samples=20
00:45:36.708 lat (msec) : 20=0.08%, 50=99.61%, 100=0.31%
00:45:36.708 cpu : usr=98.46%, sys=1.02%, ctx=52, majf=0, minf=1635
00:45:36.708 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename1: (groupid=0, jobs=1): err= 0: pid=162695: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10001msec)
00:45:36.708 slat (usec): min=5, max=118, avg=46.16, stdev=21.24
00:45:36.708 clat (usec): min=25560, max=79975, avg=31020.66, stdev=3579.78
00:45:36.708 lat (usec): min=25569, max=79994, avg=31066.82, stdev=3579.85
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.708 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30278], 60.00th=[31589],
00:45:36.708 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.708 | 99.00th=[36439], 99.50th=[36439], 99.90th=[80217], 99.95th=[80217],
00:45:36.708 | 99.99th=[80217]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2304, per=4.16%, avg=2034.68, stdev=147.02, samples=19
00:45:36.708 iops : min= 448, max= 576, avg=508.63, stdev=36.79, samples=19
00:45:36.708 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.708 cpu : usr=98.38%, sys=1.02%, ctx=38, majf=0, minf=1635
00:45:36.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162697: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10007msec)
00:45:36.708 slat (nsec): min=5092, max=77729, avg=28086.54, stdev=13990.40
00:45:36.708 clat (usec): min=17038, max=94276, avg=31236.96, stdev=4313.86
00:45:36.708 lat (usec): min=17049, max=94298, avg=31265.05, stdev=4313.58
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28443], 20.00th=[29230],
00:45:36.708 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.708 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914],
00:45:36.708 | 99.00th=[36439], 99.50th=[36439], 99.90th=[93848], 99.95th=[93848],
00:45:36.708 | 99.99th=[93848]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.32, stdev=127.45, samples=19
00:45:36.708 iops : min= 448, max= 544, avg=508.58, stdev=31.86, samples=19
00:45:36.708 lat (msec) : 20=0.20%, 50=99.49%, 100=0.31%
00:45:36.708 cpu : usr=98.27%, sys=1.25%, ctx=61, majf=0, minf=1635
00:45:36.708 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162698: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=510, BW=2044KiB/s (2093kB/s)(20.0MiB/10022msec)
00:45:36.708 slat (nsec): min=6064, max=90476, avg=29440.28, stdev=17720.29
00:45:36.708 clat (usec): min=17758, max=47492, avg=31101.49, stdev=2625.93
00:45:36.708 lat (usec): min=17775, max=47517, avg=31130.94, stdev=2623.15
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28443], 20.00th=[28967],
00:45:36.708 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.708 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914],
00:45:36.708 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449],
00:45:36.708 | 99.99th=[47449]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2304, per=4.17%, avg=2041.80, stdev=127.85, samples=20
00:45:36.708 iops : min= 448, max= 576, avg=510.45, stdev=31.96, samples=20
00:45:36.708 lat (msec) : 20=0.31%, 50=99.69%
00:45:36.708 cpu : usr=97.00%, sys=1.75%, ctx=263, majf=0, minf=1636
00:45:36.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162699: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10011msec)
00:45:36.708 slat (usec): min=4, max=108, avg=39.14, stdev=19.20
00:45:36.708 clat (usec): min=18609, max=86930, avg=31077.22, stdev=4229.70
00:45:36.708 lat (usec): min=18621, max=86949, avg=31116.36, stdev=4228.47
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[22152], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705],
00:45:36.708 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.708 | 70.00th=[32113], 80.00th=[32900], 90.00th=[34866], 95.00th=[35914],
00:45:36.708 | 99.00th=[39060], 99.50th=[46924], 99.90th=[86508], 99.95th=[86508],
00:45:36.708 | 99.99th=[86508]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2036.79, stdev=125.41, samples=19
00:45:36.708 iops : min= 448, max= 544, avg=509.16, stdev=31.35, samples=19
00:45:36.708 lat (msec) : 20=0.43%, 50=99.25%, 100=0.31%
00:45:36.708 cpu : usr=98.67%, sys=0.92%, ctx=19, majf=0, minf=1635
00:45:36.708 IO depths : 1=5.7%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5094,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162700: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=509, BW=2040KiB/s (2089kB/s)(19.9MiB/10010msec)
00:45:36.708 slat (usec): min=6, max=108, avg=29.01, stdev=19.03
00:45:36.708 clat (usec): min=24067, max=61328, avg=31105.51, stdev=2844.72
00:45:36.708 lat (usec): min=24076, max=61352, avg=31134.53, stdev=2846.79
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28443], 20.00th=[29230],
00:45:36.708 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.708 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35390],
00:45:36.708 | 99.00th=[36439], 99.50th=[36439], 99.90th=[61080], 99.95th=[61080],
00:45:36.708 | 99.99th=[61080]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2176, per=4.17%, avg=2041.26, stdev=124.20, samples=19
00:45:36.708 iops : min= 448, max= 544, avg=510.32, stdev=31.05, samples=19
00:45:36.708 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.708 cpu : usr=98.24%, sys=1.07%, ctx=77, majf=0, minf=1636
00:45:36.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162702: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=508, BW=2033KiB/s (2082kB/s)(19.9MiB/10010msec)
00:45:36.708 slat (nsec): min=5147, max=95608, avg=40648.85, stdev=19365.41
00:45:36.708 clat (usec): min=25501, max=88900, avg=31168.71, stdev=3996.59
00:45:36.708 lat (usec): min=25560, max=88919, avg=31209.36, stdev=3994.74
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[27919], 10.00th=[28443], 20.00th=[28967],
00:45:36.708 | 30.00th=[29492], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.708 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.708 | 99.00th=[36439], 99.50th=[36439], 99.90th=[88605], 99.95th=[88605],
00:45:36.708 | 99.99th=[88605]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2180, per=4.16%, avg=2035.32, stdev=127.40, samples=19
00:45:36.708 iops : min= 448, max= 545, avg=508.79, stdev=31.89, samples=19
00:45:36.708 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.708 cpu : usr=97.64%, sys=1.50%, ctx=160, majf=0, minf=1634
00:45:36.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162703: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=515, BW=2062KiB/s (2112kB/s)(20.2MiB/10024msec)
00:45:36.708 slat (nsec): min=6854, max=52486, avg=13186.88, stdev=5811.21
00:45:36.708 clat (usec): min=4154, max=45219, avg=30910.37, stdev=3517.57
00:45:36.708 lat (usec): min=4169, max=45239, avg=30923.55, stdev=3516.74
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[17433], 5.00th=[28181], 10.00th=[28443], 20.00th=[29230],
00:45:36.708 | 30.00th=[29492], 40.00th=[30016], 50.00th=[30540], 60.00th=[31851],
00:45:36.708 | 70.00th=[32113], 80.00th=[32900], 90.00th=[35390], 95.00th=[35914],
00:45:36.708 | 99.00th=[36439], 99.50th=[36439], 99.90th=[45351], 99.95th=[45351],
00:45:36.708 | 99.99th=[45351]
00:45:36.708 bw ( KiB/s): min= 1792, max= 2436, per=4.21%, avg=2060.75, stdev=155.15, samples=20
00:45:36.708 iops : min= 448, max= 609, avg=515.15, stdev=38.76, samples=20
00:45:36.708 lat (msec) : 10=0.93%, 20=0.62%, 50=98.45%
00:45:36.708 cpu : usr=97.92%, sys=1.26%, ctx=73, majf=0, minf=1634
00:45:36.708 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.708 issued rwts: total=5168,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.708 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.708 filename2: (groupid=0, jobs=1): err= 0: pid=162704: Sat Dec 14 00:26:14 2024
00:45:36.708 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10002msec)
00:45:36.708 slat (nsec): min=4202, max=95444, avg=45021.21, stdev=17984.49
00:45:36.708 clat (usec): min=25652, max=81223, avg=31071.90, stdev=3652.11
00:45:36.708 lat (usec): min=25679, max=81239, avg=31116.92, stdev=3651.08
00:45:36.708 clat percentiles (usec):
00:45:36.708 | 1.00th=[26870], 5.00th=[27919], 10.00th=[28181], 20.00th=[28967],
00:45:36.708 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589],
00:45:36.709 | 70.00th=[32113], 80.00th=[32637], 90.00th=[34866], 95.00th=[35914],
00:45:36.709 | 99.00th=[36439], 99.50th=[36439], 99.90th=[81265], 99.95th=[81265],
00:45:36.709 | 99.99th=[81265]
00:45:36.709 bw ( KiB/s): min= 1792, max= 2304, per=4.16%, avg=2034.53, stdev=147.15, samples=19
00:45:36.709 iops : min= 448, max= 576, avg=508.63, stdev=36.79, samples=19
00:45:36.709 lat (msec) : 50=99.69%, 100=0.31%
00:45:36.709 cpu : usr=98.62%, sys=0.95%, ctx=45, majf=0, minf=1633
00:45:36.709 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:45:36.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.709 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.709 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.709 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.709 filename2: (groupid=0, jobs=1): err= 0: pid=162705: Sat Dec 14 00:26:14 2024
00:45:36.709 read: IOPS=508, BW=2034KiB/s (2083kB/s)(19.9MiB/10005msec)
00:45:36.709 slat (nsec): min=8479, max=85879, avg=31155.17, stdev=14836.30
00:45:36.709 clat (msec): min=19, max=104, avg=31.19, stdev= 4.27
00:45:36.709 lat (msec): min=19, max=104, avg=31.22, stdev= 4.27
00:45:36.709 clat percentiles (msec):
00:45:36.709 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29],
00:45:36.709 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32],
00:45:36.709 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 35], 95.00th=[ 36],
00:45:36.709 | 99.00th=[ 37], 99.50th=[ 37], 99.90th=[ 92], 99.95th=[ 92],
00:45:36.709 | 99.99th=[ 105]
00:45:36.709 bw ( KiB/s): min= 1792, max= 2176, per=4.16%, avg=2034.68, stdev=127.10, samples=19
00:45:36.709 iops : min= 448, max= 544, avg=508.63, stdev=31.81, samples=19
00:45:36.709 lat (msec) : 20=0.12%, 50=99.57%, 100=0.28%, 250=0.04%
00:45:36.709 cpu : usr=97.98%, sys=1.38%, ctx=126, majf=0, minf=1635
00:45:36.709 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:45:36.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.709 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:45:36.709 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:45:36.709 latency : target=0, window=0, percentile=100.00%, depth=16
00:45:36.709
00:45:36.709 Run status group 0 (all jobs):
00:45:36.709 READ: bw=47.8MiB/s (50.1MB/s), 2033KiB/s-2076KiB/s (2082kB/s-2125kB/s), io=479MiB (503MB), run=10001-10033msec
00:45:37.275 -----------------------------------------------------
00:45:37.275 Suppressions used:
00:45:37.275 count bytes template
00:45:37.275 45 402 /usr/src/fio/parse.c
00:45:37.275 1 8 libtcmalloc_minimal.so
00:45:37.275 1 904 libcrypto.so
00:45:37.275 -----------------------------------------------------
00:45:37.275
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.275 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:45:37.276 bdev_null0
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:45:37.276 00:26:16
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 [2024-12-14 00:26:16.234738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 bdev_null1 00:45:37.276 00:26:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.276 { 00:45:37.276 "params": { 00:45:37.276 "name": "Nvme$subsystem", 00:45:37.276 "trtype": "$TEST_TRANSPORT", 00:45:37.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.276 "adrfam": "ipv4", 00:45:37.276 "trsvcid": "$NVMF_PORT", 00:45:37.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.276 "hdgst": ${hdgst:-false}, 00:45:37.276 "ddgst": ${ddgst:-false} 00:45:37.276 }, 00:45:37.276 "method": "bdev_nvme_attach_controller" 00:45:37.276 } 00:45:37.276 EOF 00:45:37.276 )") 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:37.276 00:26:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:37.276 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:37.276 { 00:45:37.276 "params": { 00:45:37.276 "name": "Nvme$subsystem", 00:45:37.276 "trtype": "$TEST_TRANSPORT", 00:45:37.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:37.276 "adrfam": "ipv4", 00:45:37.276 "trsvcid": "$NVMF_PORT", 00:45:37.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:37.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:37.276 "hdgst": ${hdgst:-false}, 00:45:37.276 "ddgst": ${ddgst:-false} 00:45:37.276 }, 00:45:37.276 "method": "bdev_nvme_attach_controller" 00:45:37.276 } 00:45:37.276 EOF 00:45:37.277 )") 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:37.277 00:26:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:37.277 "params": { 00:45:37.277 "name": "Nvme0", 00:45:37.277 "trtype": "tcp", 00:45:37.277 "traddr": "10.0.0.2", 00:45:37.277 "adrfam": "ipv4", 00:45:37.277 "trsvcid": "4420", 00:45:37.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:37.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:37.277 "hdgst": false, 00:45:37.277 "ddgst": false 00:45:37.277 }, 00:45:37.277 "method": "bdev_nvme_attach_controller" 00:45:37.277 },{ 00:45:37.277 "params": { 00:45:37.277 "name": "Nvme1", 00:45:37.277 "trtype": "tcp", 00:45:37.277 "traddr": "10.0.0.2", 00:45:37.277 "adrfam": "ipv4", 00:45:37.277 "trsvcid": "4420", 00:45:37.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:37.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:37.277 "hdgst": false, 00:45:37.277 "ddgst": false 00:45:37.277 }, 00:45:37.277 "method": "bdev_nvme_attach_controller" 00:45:37.277 }' 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:37.277 00:26:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:37.535 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:37.535 ... 
00:45:37.535 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:37.535 ... 00:45:37.535 fio-3.35 00:45:37.535 Starting 4 threads 00:45:44.095 00:45:44.095 filename0: (groupid=0, jobs=1): err= 0: pid=164792: Sat Dec 14 00:26:22 2024 00:45:44.095 read: IOPS=2238, BW=17.5MiB/s (18.3MB/s)(87.5MiB/5002msec) 00:45:44.095 slat (nsec): min=4686, max=69006, avg=10561.67, stdev=3615.77 00:45:44.095 clat (usec): min=1090, max=6619, avg=3543.05, stdev=504.12 00:45:44.095 lat (usec): min=1103, max=6627, avg=3553.61, stdev=503.92 00:45:44.095 clat percentiles (usec): 00:45:44.095 | 1.00th=[ 2376], 5.00th=[ 2737], 10.00th=[ 2966], 20.00th=[ 3294], 00:45:44.095 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3523], 00:45:44.095 | 70.00th=[ 3589], 80.00th=[ 3818], 90.00th=[ 4146], 95.00th=[ 4359], 00:45:44.095 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 5997], 99.95th=[ 6456], 00:45:44.095 | 99.99th=[ 6587] 00:45:44.095 bw ( KiB/s): min=17008, max=18800, per=24.76%, avg=17893.33, stdev=626.15, samples=9 00:45:44.095 iops : min= 2126, max= 2350, avg=2236.67, stdev=78.27, samples=9 00:45:44.095 lat (msec) : 2=0.38%, 4=85.99%, 10=13.63% 00:45:44.095 cpu : usr=95.82%, sys=3.80%, ctx=8, majf=0, minf=1630 00:45:44.095 IO depths : 1=0.1%, 2=1.5%, 4=70.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 issued rwts: total=11196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:44.095 filename0: (groupid=0, jobs=1): err= 0: pid=164794: Sat Dec 14 00:26:22 2024 00:45:44.095 read: IOPS=2204, BW=17.2MiB/s (18.1MB/s)(86.1MiB/5002msec) 00:45:44.095 slat (usec): min=5, max=153, avg=11.13, stdev= 3.97 00:45:44.095 clat (usec): min=1269, max=6165, avg=3600.36, stdev=444.01 
00:45:44.095 lat (usec): min=1284, max=6179, avg=3611.49, stdev=443.77 00:45:44.095 clat percentiles (usec): 00:45:44.095 | 1.00th=[ 2606], 5.00th=[ 2933], 10.00th=[ 3163], 20.00th=[ 3425], 00:45:44.095 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:45:44.095 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4359], 00:45:44.095 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 6063], 99.95th=[ 6128], 00:45:44.095 | 99.99th=[ 6128] 00:45:44.095 bw ( KiB/s): min=16912, max=18048, per=24.41%, avg=17639.11, stdev=318.00, samples=9 00:45:44.095 iops : min= 2114, max= 2256, avg=2204.89, stdev=39.75, samples=9 00:45:44.095 lat (msec) : 2=0.13%, 4=85.37%, 10=14.50% 00:45:44.095 cpu : usr=96.10%, sys=3.48%, ctx=11, majf=0, minf=1633 00:45:44.095 IO depths : 1=0.1%, 2=1.9%, 4=64.8%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 issued rwts: total=11026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:44.095 filename1: (groupid=0, jobs=1): err= 0: pid=164795: Sat Dec 14 00:26:22 2024 00:45:44.095 read: IOPS=2391, BW=18.7MiB/s (19.6MB/s)(93.4MiB/5002msec) 00:45:44.095 slat (usec): min=6, max=173, avg=10.48, stdev= 3.83 00:45:44.095 clat (usec): min=1436, max=45538, avg=3313.13, stdev=1188.26 00:45:44.095 lat (usec): min=1450, max=45572, avg=3323.62, stdev=1188.25 00:45:44.095 clat percentiles (usec): 00:45:44.095 | 1.00th=[ 2180], 5.00th=[ 2573], 10.00th=[ 2704], 20.00th=[ 2900], 00:45:44.095 | 30.00th=[ 2999], 40.00th=[ 3195], 50.00th=[ 3392], 60.00th=[ 3490], 00:45:44.095 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3752], 95.00th=[ 3982], 00:45:44.095 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 5866], 99.95th=[45351], 00:45:44.095 | 99.99th=[45351] 00:45:44.095 bw ( KiB/s): min=17392, max=20928, 
per=26.42%, avg=19091.56, stdev=1007.20, samples=9 00:45:44.095 iops : min= 2174, max= 2616, avg=2386.44, stdev=125.90, samples=9 00:45:44.095 lat (msec) : 2=0.48%, 4=94.56%, 10=4.90%, 50=0.07% 00:45:44.095 cpu : usr=95.54%, sys=4.06%, ctx=10, majf=0, minf=1633 00:45:44.095 IO depths : 1=0.2%, 2=5.7%, 4=65.8%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.095 issued rwts: total=11960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.095 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:44.095 filename1: (groupid=0, jobs=1): err= 0: pid=164796: Sat Dec 14 00:26:22 2024 00:45:44.095 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5001msec) 00:45:44.095 slat (nsec): min=7049, max=53052, avg=10311.20, stdev=3583.38 00:45:44.095 clat (usec): min=662, max=6883, avg=3603.91, stdev=458.82 00:45:44.095 lat (usec): min=670, max=6902, avg=3614.23, stdev=458.60 00:45:44.095 clat percentiles (usec): 00:45:44.095 | 1.00th=[ 2474], 5.00th=[ 2933], 10.00th=[ 3195], 20.00th=[ 3425], 00:45:44.095 | 30.00th=[ 3490], 40.00th=[ 3490], 50.00th=[ 3523], 60.00th=[ 3556], 00:45:44.096 | 70.00th=[ 3654], 80.00th=[ 3851], 90.00th=[ 4146], 95.00th=[ 4359], 00:45:44.096 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 6128], 99.95th=[ 6390], 00:45:44.096 | 99.99th=[ 6849] 00:45:44.096 bw ( KiB/s): min=16944, max=18352, per=24.33%, avg=17584.00, stdev=533.79, samples=9 00:45:44.096 iops : min= 2118, max= 2294, avg=2198.00, stdev=66.72, samples=9 00:45:44.096 lat (usec) : 750=0.03%, 1000=0.05% 00:45:44.096 lat (msec) : 2=0.20%, 4=85.44%, 10=14.28% 00:45:44.096 cpu : usr=96.02%, sys=3.60%, ctx=9, majf=0, minf=1633 00:45:44.096 IO depths : 1=0.1%, 2=1.4%, 4=72.2%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:44.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.096 complete : 0=0.0%, 
4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:44.096 issued rwts: total=11005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:44.096 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:44.096 00:45:44.096 Run status group 0 (all jobs): 00:45:44.096 READ: bw=70.6MiB/s (74.0MB/s), 17.2MiB/s-18.7MiB/s (18.0MB/s-19.6MB/s), io=353MiB (370MB), run=5001-5002msec 00:45:45.030 ----------------------------------------------------- 00:45:45.030 Suppressions used: 00:45:45.030 count bytes template 00:45:45.030 6 52 /usr/src/fio/parse.c 00:45:45.030 1 8 libtcmalloc_minimal.so 00:45:45.030 1 904 libcrypto.so 00:45:45.030 ----------------------------------------------------- 00:45:45.030 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:45:45.030 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:45:45.031 real 0m28.546s 00:45:45.031 user 4m55.857s 00:45:45.031 sys 0m5.981s 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 ************************************ 00:45:45.031 END TEST fio_dif_rand_params 00:45:45.031 ************************************ 00:45:45.031 00:26:24 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:45.031 00:26:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:45.031 00:26:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 ************************************ 00:45:45.031 START TEST 
fio_dif_digest 00:45:45.031 ************************************ 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 bdev_null0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:45.031 [2024-12-14 00:26:24.159206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:45.031 { 00:45:45.031 "params": { 00:45:45.031 "name": "Nvme$subsystem", 00:45:45.031 "trtype": "$TEST_TRANSPORT", 00:45:45.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:45.031 "adrfam": "ipv4", 00:45:45.031 "trsvcid": "$NVMF_PORT", 00:45:45.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:45.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:45.031 "hdgst": ${hdgst:-false}, 00:45:45.031 "ddgst": ${ddgst:-false} 00:45:45.031 }, 00:45:45.031 "method": "bdev_nvme_attach_controller" 00:45:45.031 } 00:45:45.031 EOF 00:45:45.031 )") 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:45.031 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:45.289 "params": { 00:45:45.289 "name": "Nvme0", 00:45:45.289 "trtype": "tcp", 00:45:45.289 "traddr": "10.0.0.2", 00:45:45.289 "adrfam": "ipv4", 00:45:45.289 "trsvcid": "4420", 00:45:45.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:45.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:45.289 "hdgst": true, 00:45:45.289 "ddgst": true 00:45:45.289 }, 00:45:45.289 "method": "bdev_nvme_attach_controller" 00:45:45.289 }' 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:45.289 00:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:45.547 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:45.547 ... 00:45:45.547 fio-3.35 00:45:45.547 Starting 3 threads 00:45:57.745 00:45:57.745 filename0: (groupid=0, jobs=1): err= 0: pid=166182: Sat Dec 14 00:26:35 2024 00:45:57.745 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10050msec) 00:45:57.745 slat (nsec): min=6123, max=26592, avg=14163.98, stdev=1517.67 00:45:57.745 clat (usec): min=9554, max=55191, avg=12533.33, stdev=2087.77 00:45:57.745 lat (usec): min=9568, max=55204, avg=12547.49, stdev=2087.80 00:45:57.745 clat percentiles (usec): 00:45:57.745 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:45:57.745 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:45:57.745 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13566], 95.00th=[13960], 00:45:57.745 | 99.00th=[14746], 99.50th=[15533], 99.90th=[54264], 99.95th=[55313], 00:45:57.745 | 99.99th=[55313] 00:45:57.745 bw ( KiB/s): min=27904, max=31488, per=34.81%, avg=30681.60, stdev=823.38, samples=20 00:45:57.745 iops : min= 218, max= 246, avg=239.70, stdev= 6.43, samples=20 00:45:57.745 lat (msec) : 10=0.33%, 20=99.42%, 50=0.04%, 100=0.21% 00:45:57.745 cpu : usr=93.92%, sys=5.73%, ctx=30, majf=0, minf=1637 00:45:57.745 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 issued rwts: total=2399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.745 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:57.745 filename0: (groupid=0, jobs=1): err= 0: pid=166183: Sat Dec 14 00:26:35 2024 00:45:57.745 read: IOPS=223, BW=28.0MiB/s (29.4MB/s)(281MiB/10046msec) 00:45:57.745 slat (nsec): min=7798, max=48381, avg=14411.06, 
stdev=1713.40 00:45:57.745 clat (usec): min=8176, max=50528, avg=13358.45, stdev=1432.15 00:45:57.745 lat (usec): min=8189, max=50541, avg=13372.86, stdev=1432.29 00:45:57.745 clat percentiles (usec): 00:45:57.745 | 1.00th=[10814], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:45:57.745 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:45:57.745 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:45:57.745 | 99.00th=[15664], 99.50th=[15926], 99.90th=[17171], 99.95th=[45876], 00:45:57.745 | 99.99th=[50594] 00:45:57.745 bw ( KiB/s): min=27392, max=30208, per=32.63%, avg=28766.32, stdev=699.58, samples=19 00:45:57.745 iops : min= 214, max= 236, avg=224.74, stdev= 5.47, samples=19 00:45:57.745 lat (msec) : 10=0.58%, 20=99.33%, 50=0.04%, 100=0.04% 00:45:57.745 cpu : usr=94.57%, sys=5.08%, ctx=22, majf=0, minf=1634 00:45:57.745 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.745 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:57.745 filename0: (groupid=0, jobs=1): err= 0: pid=166184: Sat Dec 14 00:26:35 2024 00:45:57.745 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(284MiB/10004msec) 00:45:57.745 slat (nsec): min=7551, max=33557, avg=14463.26, stdev=1762.02 00:45:57.745 clat (usec): min=5394, max=16754, avg=13190.86, stdev=990.95 00:45:57.745 lat (usec): min=5408, max=16767, avg=13205.32, stdev=991.14 00:45:57.745 clat percentiles (usec): 00:45:57.745 | 1.00th=[11076], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:45:57.745 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:45:57.745 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:45:57.745 | 99.00th=[15533], 99.50th=[15926], 
99.90th=[16450], 99.95th=[16450], 00:45:57.745 | 99.99th=[16712] 00:45:57.745 bw ( KiB/s): min=28416, max=29952, per=32.94%, avg=29032.68, stdev=490.03, samples=19 00:45:57.745 iops : min= 222, max= 234, avg=226.79, stdev= 3.81, samples=19 00:45:57.745 lat (msec) : 10=0.62%, 20=99.38% 00:45:57.745 cpu : usr=94.38%, sys=5.27%, ctx=19, majf=0, minf=1637 00:45:57.745 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:57.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:57.745 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:57.745 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:57.745 00:45:57.745 Run status group 0 (all jobs): 00:45:57.745 READ: bw=86.1MiB/s (90.3MB/s), 28.0MiB/s-29.8MiB/s (29.4MB/s-31.3MB/s), io=865MiB (907MB), run=10004-10050msec 00:45:57.745 ----------------------------------------------------- 00:45:57.745 Suppressions used: 00:45:57.745 count bytes template 00:45:57.745 5 44 /usr/src/fio/parse.c 00:45:57.745 1 8 libtcmalloc_minimal.so 00:45:57.745 1 904 libcrypto.so 00:45:57.745 ----------------------------------------------------- 00:45:57.745 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:57.745 00:45:57.745 real 0m12.434s 00:45:57.745 user 0m36.209s 00:45:57.745 sys 0m2.060s 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:57.745 00:26:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:57.745 ************************************ 00:45:57.745 END TEST fio_dif_digest 00:45:57.745 ************************************ 00:45:57.745 00:26:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:57.745 00:26:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:57.745 00:26:36 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:57.745 rmmod nvme_tcp 00:45:57.745 rmmod nvme_fabrics 00:45:57.745 rmmod nvme_keyring 00:45:57.746 00:26:36 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:45:57.746 00:26:36 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:57.746 00:26:36 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:57.746 00:26:36 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 156750 ']' 00:45:57.746 00:26:36 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 
156750 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 156750 ']' 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 156750 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 156750 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 156750' 00:45:57.746 killing process with pid 156750 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@973 -- # kill 156750 00:45:57.746 00:26:36 nvmf_dif -- common/autotest_common.sh@978 -- # wait 156750 00:45:58.680 00:26:37 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:58.680 00:26:37 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:01.210 Waiting for block devices as requested 00:46:01.468 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:01.468 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:01.468 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:01.468 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:01.726 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:01.726 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:01.726 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:01.726 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:01.983 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:01.983 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:01.983 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:02.241 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:02.241 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:02.241 0000:80:04.3 
(8086 2021): vfio-pci -> ioatdma 00:46:02.241 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:02.499 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:02.499 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:02.499 00:26:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:02.499 00:26:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:02.499 00:26:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:05.030 00:26:43 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:05.030 00:46:05.030 real 1m23.070s 00:46:05.030 user 7m25.984s 00:46:05.030 sys 0m21.230s 00:46:05.030 00:26:43 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:05.030 00:26:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:46:05.030 ************************************ 00:46:05.030 END TEST nvmf_dif 00:46:05.030 ************************************ 00:46:05.030 00:26:43 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:05.030 00:26:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:05.030 00:26:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:05.030 00:26:43 -- common/autotest_common.sh@10 -- # set +x 00:46:05.030 
************************************ 00:46:05.030 START TEST nvmf_abort_qd_sizes 00:46:05.030 ************************************ 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:46:05.030 * Looking for test storage... 00:46:05.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.030 --rc genhtml_branch_coverage=1 00:46:05.030 --rc genhtml_function_coverage=1 00:46:05.030 --rc genhtml_legend=1 00:46:05.030 --rc geninfo_all_blocks=1 00:46:05.030 --rc geninfo_unexecuted_blocks=1 00:46:05.030 00:46:05.030 ' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:46:05.030 --rc genhtml_branch_coverage=1 00:46:05.030 --rc genhtml_function_coverage=1 00:46:05.030 --rc genhtml_legend=1 00:46:05.030 --rc geninfo_all_blocks=1 00:46:05.030 --rc geninfo_unexecuted_blocks=1 00:46:05.030 00:46:05.030 ' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.030 --rc genhtml_branch_coverage=1 00:46:05.030 --rc genhtml_function_coverage=1 00:46:05.030 --rc genhtml_legend=1 00:46:05.030 --rc geninfo_all_blocks=1 00:46:05.030 --rc geninfo_unexecuted_blocks=1 00:46:05.030 00:46:05.030 ' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:05.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:05.030 --rc genhtml_branch_coverage=1 00:46:05.030 --rc genhtml_function_coverage=1 00:46:05.030 --rc genhtml_legend=1 00:46:05.030 --rc geninfo_all_blocks=1 00:46:05.030 --rc geninfo_unexecuted_blocks=1 00:46:05.030 00:46:05.030 ' 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:46:05.030 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:05.031 00:26:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:05.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:46:05.031 00:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:46:10.303 00:26:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:46:10.303 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:46:10.304 Found 0000:af:00.0 (0x8086 - 0x159b) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:46:10.304 Found 0000:af:00.1 (0x8086 - 0x159b) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:46:10.304 Found net devices under 0000:af:00.0: cvl_0_0 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:46:10.304 Found net devices under 0000:af:00.1: cvl_0_1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:46:10.304 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:46:10.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:10.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:46:10.563 00:46:10.563 --- 10.0.0.2 ping statistics --- 00:46:10.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:10.563 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:46:10.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:46:10.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:46:10.563 00:46:10.563 --- 10.0.0.1 ping statistics --- 00:46:10.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:10.563 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:46:10.563 00:26:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:13.092 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:13.092 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:13.659 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:13.659 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:13.659 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:13.659 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:13.659 00:26:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:13.659 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:13.659 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=174079 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 174079 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 174079 ']' 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:13.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:13.918 00:26:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:13.918 [2024-12-14 00:26:52.920303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:46:13.918 [2024-12-14 00:26:52.920395] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:13.918 [2024-12-14 00:26:53.039821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:14.176 [2024-12-14 00:26:53.148626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:14.176 [2024-12-14 00:26:53.148676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:14.176 [2024-12-14 00:26:53.148686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:14.176 [2024-12-14 00:26:53.148697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:14.176 [2024-12-14 00:26:53.148705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:14.176 [2024-12-14 00:26:53.151026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:14.176 [2024-12-14 00:26:53.151101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:46:14.176 [2024-12-14 00:26:53.151163] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:14.176 [2024-12-14 00:26:53.151173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:14.741 00:26:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:14.741 ************************************ 00:46:14.741 START TEST spdk_target_abort 00:46:14.741 ************************************ 00:46:14.741 00:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:14.741 00:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:14.741 00:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:46:14.741 00:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:14.741 00:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:18.072 spdk_targetn1 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:18.072 [2024-12-14 00:26:56.688524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:18.072 [2024-12-14 00:26:56.741210] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:18.072 00:26:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:21.445 Initializing NVMe Controllers 00:46:21.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:21.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:21.445 Initialization complete. Launching workers. 
00:46:21.445 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14855, failed: 0 00:46:21.445 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 13596 00:46:21.445 success 751, unsuccessful 508, failed 0 00:46:21.445 00:27:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:21.445 00:27:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:24.726 Initializing NVMe Controllers 00:46:24.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:24.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:24.726 Initialization complete. Launching workers. 00:46:24.726 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8396, failed: 0 00:46:24.726 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1254, failed to submit 7142 00:46:24.726 success 293, unsuccessful 961, failed 0 00:46:24.726 00:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:24.726 00:27:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:28.005 Initializing NVMe Controllers 00:46:28.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:28.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:28.005 Initialization complete. Launching workers. 
00:46:28.005 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33608, failed: 0 00:46:28.005 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2751, failed to submit 30857 00:46:28.005 success 579, unsuccessful 2172, failed 0 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:28.005 00:27:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 174079 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 174079 ']' 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 174079 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:29.376 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174079 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174079' 00:46:29.377 killing process with pid 174079 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 174079 00:46:29.377 00:27:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 174079 00:46:30.311 00:46:30.311 real 0m15.293s 00:46:30.311 user 0m59.878s 00:46:30.311 sys 0m2.667s 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:30.311 ************************************ 00:46:30.311 END TEST spdk_target_abort 00:46:30.311 ************************************ 00:46:30.311 00:27:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:30.311 00:27:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:30.311 00:27:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:30.311 00:27:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:30.311 ************************************ 00:46:30.311 START TEST kernel_target_abort 00:46:30.311 ************************************ 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:30.311 00:27:09 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:30.311 00:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:32.842 Waiting for block devices as requested 00:46:32.842 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:32.842 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:32.842 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:32.842 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:33.101 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:33.101 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:33.101 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:33.101 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:33.360 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:33.360 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:33.360 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:33.619 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:33.619 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:33.619 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:33.619 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:33.878 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:33.878 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:34.446 No valid GPT data, bailing 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:46:34.446 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:46:34.706 00:46:34.706 Discovery Log Number of Records 2, Generation counter 2 00:46:34.706 =====Discovery Log Entry 0====== 00:46:34.706 trtype: tcp 00:46:34.706 adrfam: ipv4 00:46:34.706 subtype: current discovery subsystem 00:46:34.706 treq: not specified, sq flow control disable supported 00:46:34.706 portid: 1 00:46:34.706 trsvcid: 4420 00:46:34.706 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:46:34.706 traddr: 10.0.0.1 00:46:34.706 eflags: none 00:46:34.706 sectype: none 00:46:34.706 =====Discovery Log Entry 1====== 00:46:34.706 trtype: tcp 00:46:34.706 adrfam: ipv4 00:46:34.706 subtype: nvme subsystem 00:46:34.706 treq: not specified, sq flow control disable supported 00:46:34.706 portid: 1 00:46:34.706 trsvcid: 4420 00:46:34.706 subnqn: nqn.2016-06.io.spdk:testnqn 00:46:34.706 traddr: 10.0.0.1 00:46:34.706 eflags: none 00:46:34.706 sectype: none 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:34.706 00:27:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:37.994 Initializing NVMe Controllers 00:46:37.994 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:37.994 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:37.994 Initialization complete. Launching workers. 
00:46:37.994 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80127, failed: 0 00:46:37.994 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 80127, failed to submit 0 00:46:37.994 success 0, unsuccessful 80127, failed 0 00:46:37.994 00:27:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:37.994 00:27:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:41.285 Initializing NVMe Controllers 00:46:41.285 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:41.285 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:41.285 Initialization complete. Launching workers. 00:46:41.285 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 127611, failed: 0 00:46:41.285 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32018, failed to submit 95593 00:46:41.285 success 0, unsuccessful 32018, failed 0 00:46:41.285 00:27:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:41.285 00:27:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:44.575 Initializing NVMe Controllers 00:46:44.575 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:44.575 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:44.575 Initialization complete. Launching workers. 
00:46:44.575 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119187, failed: 0 00:46:44.575 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29806, failed to submit 89381 00:46:44.575 success 0, unsuccessful 29806, failed 0 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:44.575 00:27:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:47.111 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:46:47.111 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:46:47.679 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:46:47.679 00:46:47.679 real 0m17.580s 00:46:47.679 user 0m9.088s 00:46:47.679 sys 0m5.238s 00:46:47.679 00:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:47.679 00:27:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:47.679 ************************************ 00:46:47.679 END TEST kernel_target_abort 00:46:47.679 ************************************ 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:47.679 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:47.679 rmmod nvme_tcp 00:46:47.938 rmmod nvme_fabrics 00:46:47.938 rmmod nvme_keyring 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 174079 ']' 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 174079 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 174079 ']' 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 174079 00:46:47.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (174079) - No such process 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 174079 is not found' 00:46:47.938 Process with pid 174079 is not found 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:47.938 00:27:26 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:50.473 Waiting for block devices as requested 00:46:50.473 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:46:50.473 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:50.473 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:50.473 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:50.473 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:50.473 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:50.473 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:50.732 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:50.732 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:50.732 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:50.732 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:50.991 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:50.991 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:50.991 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:50.991 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:46:51.250 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:51.250 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:51.250 00:27:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:53.789 00:27:32 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:53.789 00:46:53.789 real 0m48.695s 00:46:53.789 user 1m12.796s 00:46:53.789 sys 0m15.758s 00:46:53.789 00:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:53.789 00:27:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:53.789 ************************************ 00:46:53.789 END TEST nvmf_abort_qd_sizes 00:46:53.789 ************************************ 00:46:53.789 00:27:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:53.789 00:27:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:53.789 00:27:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:46:53.789 00:27:32 -- common/autotest_common.sh@10 -- # set +x 00:46:53.789 ************************************ 00:46:53.789 START TEST keyring_file 00:46:53.789 ************************************ 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:53.789 * Looking for test storage... 00:46:53.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:53.789 00:27:32 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:53.789 00:27:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:53.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.789 --rc genhtml_branch_coverage=1 00:46:53.789 --rc genhtml_function_coverage=1 00:46:53.789 --rc genhtml_legend=1 00:46:53.789 --rc geninfo_all_blocks=1 00:46:53.789 --rc geninfo_unexecuted_blocks=1 00:46:53.789 00:46:53.789 ' 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:53.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.789 --rc genhtml_branch_coverage=1 00:46:53.789 --rc genhtml_function_coverage=1 00:46:53.789 --rc genhtml_legend=1 00:46:53.789 --rc geninfo_all_blocks=1 00:46:53.789 --rc 
geninfo_unexecuted_blocks=1 00:46:53.789 00:46:53.789 ' 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:53.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.789 --rc genhtml_branch_coverage=1 00:46:53.789 --rc genhtml_function_coverage=1 00:46:53.789 --rc genhtml_legend=1 00:46:53.789 --rc geninfo_all_blocks=1 00:46:53.789 --rc geninfo_unexecuted_blocks=1 00:46:53.789 00:46:53.789 ' 00:46:53.789 00:27:32 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:53.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.789 --rc genhtml_branch_coverage=1 00:46:53.789 --rc genhtml_function_coverage=1 00:46:53.789 --rc genhtml_legend=1 00:46:53.789 --rc geninfo_all_blocks=1 00:46:53.789 --rc geninfo_unexecuted_blocks=1 00:46:53.789 00:46:53.789 ' 00:46:53.789 00:27:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:53.789 00:27:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:53.789 00:27:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:53.790 00:27:32 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:53.790 00:27:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:53.790 00:27:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:53.790 00:27:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:53.790 00:27:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:53.790 00:27:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.790 00:27:32 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.790 00:27:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.790 00:27:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:53.790 00:27:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:46:53.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iITMFPpGW6 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iITMFPpGW6 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iITMFPpGW6 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.iITMFPpGW6 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TJMQsELWpr 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:53.790 00:27:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TJMQsELWpr 00:46:53.790 00:27:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TJMQsELWpr 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TJMQsELWpr 
00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=183571 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 183571 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 183571 ']' 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:53.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:53.790 00:27:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:53.790 00:27:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:53.790 [2024-12-14 00:27:32.815171] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:46:53.790 [2024-12-14 00:27:32.815271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183571 ] 00:46:54.049 [2024-12-14 00:27:32.929058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:54.049 [2024-12-14 00:27:33.033006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:54.987 00:27:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:54.987 [2024-12-14 00:27:33.838314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:54.987 null0 00:46:54.987 [2024-12-14 00:27:33.870355] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:54.987 [2024-12-14 00:27:33.870704] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:54.987 00:27:33 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:54.987 00:27:33 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:54.988 [2024-12-14 00:27:33.894392] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:54.988 request: 00:46:54.988 { 00:46:54.988 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:54.988 "secure_channel": false, 00:46:54.988 "listen_address": { 00:46:54.988 "trtype": "tcp", 00:46:54.988 "traddr": "127.0.0.1", 00:46:54.988 "trsvcid": "4420" 00:46:54.988 }, 00:46:54.988 "method": "nvmf_subsystem_add_listener", 00:46:54.988 "req_id": 1 00:46:54.988 } 00:46:54.988 Got JSON-RPC error response 00:46:54.988 response: 00:46:54.988 { 00:46:54.988 "code": -32602, 00:46:54.988 "message": "Invalid parameters" 00:46:54.988 } 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:54.988 00:27:33 keyring_file -- keyring/file.sh@47 -- # bperfpid=183659 00:46:54.988 00:27:33 keyring_file -- keyring/file.sh@49 -- # waitforlisten 183659 /var/tmp/bperf.sock 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 183659 ']' 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:54.988 00:27:33 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:54.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:54.988 00:27:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:54.988 00:27:33 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:54.988 [2024-12-14 00:27:33.971404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:46:54.988 [2024-12-14 00:27:33.971499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183659 ] 00:46:54.988 [2024-12-14 00:27:34.083337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:55.247 [2024-12-14 00:27:34.193388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:46:55.816 00:27:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:55.816 00:27:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:55.816 00:27:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:46:55.816 00:27:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:46:56.075 00:27:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TJMQsELWpr 00:46:56.075 00:27:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TJMQsELWpr 00:46:56.075 00:27:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:56.075 00:27:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:56.075 00:27:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.075 00:27:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:56.075 00:27:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.333 00:27:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.iITMFPpGW6 == \/\t\m\p\/\t\m\p\.\i\I\T\M\F\P\p\G\W\6 ]] 00:46:56.333 00:27:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:56.333 00:27:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:56.333 00:27:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.333 00:27:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:56.333 00:27:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.592 00:27:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TJMQsELWpr == \/\t\m\p\/\t\m\p\.\T\J\M\Q\s\E\L\W\p\r ]] 00:46:56.592 00:27:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:46:56.592 00:27:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:56.592 00:27:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:56.592 00:27:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.851 00:27:35 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:56.851 00:27:35 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:56.851 00:27:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:57.110 [2024-12-14 00:27:36.077346] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:57.110 nvme0n1 00:46:57.110 00:27:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:57.110 00:27:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:57.110 00:27:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:57.110 00:27:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:57.110 00:27:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:57.110 00:27:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:46:57.369 00:27:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:57.369 00:27:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:57.369 00:27:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:57.369 00:27:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:57.369 00:27:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:57.369 00:27:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:57.369 00:27:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:57.629 00:27:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:57.629 00:27:36 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:57.629 Running I/O for 1 seconds... 00:46:58.566 15072.00 IOPS, 58.88 MiB/s 00:46:58.566 Latency(us) 00:46:58.566 [2024-12-13T23:27:37.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:58.566 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:58.566 nvme0n1 : 1.01 15119.17 59.06 0.00 0.00 8447.06 4025.78 13981.01 00:46:58.566 [2024-12-13T23:27:37.707Z] =================================================================================================================== 00:46:58.566 [2024-12-13T23:27:37.707Z] Total : 15119.17 59.06 0.00 0.00 8447.06 4025.78 13981.01 00:46:58.566 { 00:46:58.566 "results": [ 00:46:58.566 { 00:46:58.566 "job": "nvme0n1", 00:46:58.566 "core_mask": "0x2", 00:46:58.566 "workload": "randrw", 00:46:58.566 "percentage": 50, 00:46:58.566 "status": "finished", 00:46:58.566 "queue_depth": 128, 00:46:58.566 "io_size": 4096, 00:46:58.566 "runtime": 1.005346, 00:46:58.566 "iops": 15119.172901667685, 00:46:58.566 "mibps": 59.059269147139396, 
00:46:58.566 "io_failed": 0, 00:46:58.566 "io_timeout": 0, 00:46:58.566 "avg_latency_us": 8447.057612030074, 00:46:58.566 "min_latency_us": 4025.782857142857, 00:46:58.566 "max_latency_us": 13981.013333333334 00:46:58.566 } 00:46:58.566 ], 00:46:58.566 "core_count": 1 00:46:58.566 } 00:46:58.566 00:27:37 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:58.566 00:27:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:58.824 00:27:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:58.824 00:27:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:58.824 00:27:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:58.824 00:27:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:58.824 00:27:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:58.824 00:27:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:59.082 00:27:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:59.082 00:27:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:59.082 00:27:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:59.082 00:27:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:59.082 00:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:59.082 00:27:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:59.082 00:27:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:59.342 00:27:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:59.342 00:27:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:59.342 [2024-12-14 00:27:38.416893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:59.342 [2024-12-14 00:27:38.417122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:46:59.342 [2024-12-14 00:27:38.418105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:46:59.342 [2024-12-14 00:27:38.419101] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:59.342 [2024-12-14 00:27:38.419122] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:59.342 [2024-12-14 00:27:38.419134] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:59.342 [2024-12-14 00:27:38.419148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:46:59.342 request: 00:46:59.342 { 00:46:59.342 "name": "nvme0", 00:46:59.342 "trtype": "tcp", 00:46:59.342 "traddr": "127.0.0.1", 00:46:59.342 "adrfam": "ipv4", 00:46:59.342 "trsvcid": "4420", 00:46:59.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:59.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:59.342 "prchk_reftag": false, 00:46:59.342 "prchk_guard": false, 00:46:59.342 "hdgst": false, 00:46:59.342 "ddgst": false, 00:46:59.342 "psk": "key1", 00:46:59.342 "allow_unrecognized_csi": false, 00:46:59.342 "method": "bdev_nvme_attach_controller", 00:46:59.342 "req_id": 1 00:46:59.342 } 00:46:59.342 Got JSON-RPC error response 00:46:59.342 response: 00:46:59.342 { 00:46:59.342 "code": -5, 00:46:59.342 "message": "Input/output error" 00:46:59.342 } 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:59.342 00:27:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:59.342 00:27:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:59.342 00:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:59.601 00:27:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:59.601 00:27:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:59.601 00:27:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:59.601 00:27:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:59.601 00:27:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:59.601 00:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:59.601 00:27:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:59.859 00:27:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:59.859 00:27:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:59.859 00:27:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:00.118 00:27:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:47:00.118 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:47:00.118 00:27:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:47:00.118 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.118 00:27:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:47:00.377 00:27:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:47:00.377 00:27:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.iITMFPpGW6 00:47:00.377 00:27:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.377 00:27:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.377 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.636 [2024-12-14 00:27:39.552575] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iITMFPpGW6': 0100660 00:47:00.636 [2024-12-14 00:27:39.552609] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:47:00.636 request: 00:47:00.636 { 00:47:00.636 "name": "key0", 00:47:00.636 "path": "/tmp/tmp.iITMFPpGW6", 00:47:00.636 "method": "keyring_file_add_key", 00:47:00.636 "req_id": 1 00:47:00.636 } 00:47:00.636 Got JSON-RPC error response 00:47:00.636 response: 00:47:00.636 { 00:47:00.636 "code": -1, 00:47:00.636 "message": "Operation not permitted" 00:47:00.636 } 00:47:00.636 00:27:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:00.636 00:27:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:00.636 00:27:39 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:00.636 00:27:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:00.636 00:27:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.iITMFPpGW6 00:47:00.636 00:27:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iITMFPpGW6 00:47:00.636 00:27:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.iITMFPpGW6 00:47:00.636 00:27:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:00.636 00:27:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:00.895 00:27:39 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:47:00.895 00:27:39 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:00.895 00:27:39 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:00.895 00:27:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:00.895 00:27:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:01.155 [2024-12-14 00:27:40.150213] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.iITMFPpGW6': No such file or directory 00:47:01.155 [2024-12-14 00:27:40.150253] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:47:01.155 [2024-12-14 00:27:40.150274] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:47:01.155 [2024-12-14 00:27:40.150286] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:47:01.155 [2024-12-14 00:27:40.150297] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:47:01.155 [2024-12-14 00:27:40.150307] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:47:01.155 request: 00:47:01.155 { 00:47:01.155 "name": "nvme0", 00:47:01.155 "trtype": "tcp", 00:47:01.155 "traddr": "127.0.0.1", 00:47:01.155 "adrfam": "ipv4", 00:47:01.155 "trsvcid": "4420", 00:47:01.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:01.155 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:47:01.155 "prchk_reftag": false, 00:47:01.155 "prchk_guard": false, 00:47:01.155 "hdgst": false, 00:47:01.155 "ddgst": false, 00:47:01.155 "psk": "key0", 00:47:01.155 "allow_unrecognized_csi": false, 00:47:01.155 "method": "bdev_nvme_attach_controller", 00:47:01.155 "req_id": 1 00:47:01.155 } 00:47:01.155 Got JSON-RPC error response 00:47:01.155 response: 00:47:01.155 { 00:47:01.155 "code": -19, 00:47:01.155 "message": "No such device" 00:47:01.155 } 00:47:01.155 00:27:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:47:01.155 00:27:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:01.155 00:27:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:01.155 00:27:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:01.155 00:27:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:47:01.155 00:27:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:01.414 00:27:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dxOpouXkTf 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:01.414 00:27:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:01.414 00:27:40 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:47:01.414 00:27:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:01.414 00:27:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:47:01.414 00:27:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:47:01.414 00:27:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dxOpouXkTf 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dxOpouXkTf 00:47:01.414 00:27:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.dxOpouXkTf 00:47:01.414 00:27:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dxOpouXkTf 00:47:01.414 00:27:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dxOpouXkTf 00:47:01.673 00:27:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:01.673 00:27:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:01.932 nvme0n1 00:47:01.932 00:27:40 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:47:01.932 00:27:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:01.932 00:27:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:01.932 00:27:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:01.932 00:27:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:01.932 00:27:40 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:01.932 00:27:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:47:01.932 00:27:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:47:01.932 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:47:02.191 00:27:41 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:47:02.191 00:27:41 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:47:02.191 00:27:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:02.191 00:27:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:02.191 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.450 00:27:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:47:02.450 00:27:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:47:02.450 00:27:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:02.450 00:27:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:02.450 00:27:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:02.450 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.450 00:27:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:02.708 00:27:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:47:02.708 00:27:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:02.708 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:47:02.709 00:27:41 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:47:02.709 00:27:41 keyring_file -- keyring/file.sh@105 -- # jq length 00:47:02.709 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:02.967 00:27:41 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:47:02.967 00:27:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dxOpouXkTf 00:47:02.967 00:27:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dxOpouXkTf 00:47:03.226 00:27:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TJMQsELWpr 00:47:03.226 00:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TJMQsELWpr 00:47:03.227 00:27:42 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:03.227 00:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:47:03.486 nvme0n1 00:47:03.486 00:27:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:47:03.486 00:27:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:47:03.745 00:27:42 keyring_file -- keyring/file.sh@113 -- # config='{ 00:47:03.745 "subsystems": [ 00:47:03.745 { 00:47:03.745 "subsystem": 
"keyring", 00:47:03.745 "config": [ 00:47:03.745 { 00:47:03.745 "method": "keyring_file_add_key", 00:47:03.745 "params": { 00:47:03.745 "name": "key0", 00:47:03.745 "path": "/tmp/tmp.dxOpouXkTf" 00:47:03.745 } 00:47:03.745 }, 00:47:03.745 { 00:47:03.745 "method": "keyring_file_add_key", 00:47:03.745 "params": { 00:47:03.745 "name": "key1", 00:47:03.745 "path": "/tmp/tmp.TJMQsELWpr" 00:47:03.745 } 00:47:03.745 } 00:47:03.745 ] 00:47:03.745 }, 00:47:03.745 { 00:47:03.745 "subsystem": "iobuf", 00:47:03.745 "config": [ 00:47:03.745 { 00:47:03.745 "method": "iobuf_set_options", 00:47:03.745 "params": { 00:47:03.745 "small_pool_count": 8192, 00:47:03.745 "large_pool_count": 1024, 00:47:03.745 "small_bufsize": 8192, 00:47:03.745 "large_bufsize": 135168, 00:47:03.745 "enable_numa": false 00:47:03.745 } 00:47:03.745 } 00:47:03.745 ] 00:47:03.745 }, 00:47:03.745 { 00:47:03.745 "subsystem": "sock", 00:47:03.745 "config": [ 00:47:03.745 { 00:47:03.745 "method": "sock_set_default_impl", 00:47:03.745 "params": { 00:47:03.745 "impl_name": "posix" 00:47:03.745 } 00:47:03.745 }, 00:47:03.745 { 00:47:03.745 "method": "sock_impl_set_options", 00:47:03.745 "params": { 00:47:03.745 "impl_name": "ssl", 00:47:03.745 "recv_buf_size": 4096, 00:47:03.745 "send_buf_size": 4096, 00:47:03.745 "enable_recv_pipe": true, 00:47:03.745 "enable_quickack": false, 00:47:03.745 "enable_placement_id": 0, 00:47:03.745 "enable_zerocopy_send_server": true, 00:47:03.745 "enable_zerocopy_send_client": false, 00:47:03.745 "zerocopy_threshold": 0, 00:47:03.745 "tls_version": 0, 00:47:03.745 "enable_ktls": false 00:47:03.745 } 00:47:03.745 }, 00:47:03.745 { 00:47:03.745 "method": "sock_impl_set_options", 00:47:03.745 "params": { 00:47:03.745 "impl_name": "posix", 00:47:03.745 "recv_buf_size": 2097152, 00:47:03.745 "send_buf_size": 2097152, 00:47:03.745 "enable_recv_pipe": true, 00:47:03.746 "enable_quickack": false, 00:47:03.746 "enable_placement_id": 0, 00:47:03.746 "enable_zerocopy_send_server": true, 
00:47:03.746 "enable_zerocopy_send_client": false, 00:47:03.746 "zerocopy_threshold": 0, 00:47:03.746 "tls_version": 0, 00:47:03.746 "enable_ktls": false 00:47:03.746 } 00:47:03.746 } 00:47:03.746 ] 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "subsystem": "vmd", 00:47:03.746 "config": [] 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "subsystem": "accel", 00:47:03.746 "config": [ 00:47:03.746 { 00:47:03.746 "method": "accel_set_options", 00:47:03.746 "params": { 00:47:03.746 "small_cache_size": 128, 00:47:03.746 "large_cache_size": 16, 00:47:03.746 "task_count": 2048, 00:47:03.746 "sequence_count": 2048, 00:47:03.746 "buf_count": 2048 00:47:03.746 } 00:47:03.746 } 00:47:03.746 ] 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "subsystem": "bdev", 00:47:03.746 "config": [ 00:47:03.746 { 00:47:03.746 "method": "bdev_set_options", 00:47:03.746 "params": { 00:47:03.746 "bdev_io_pool_size": 65535, 00:47:03.746 "bdev_io_cache_size": 256, 00:47:03.746 "bdev_auto_examine": true, 00:47:03.746 "iobuf_small_cache_size": 128, 00:47:03.746 "iobuf_large_cache_size": 16 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_raid_set_options", 00:47:03.746 "params": { 00:47:03.746 "process_window_size_kb": 1024, 00:47:03.746 "process_max_bandwidth_mb_sec": 0 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_iscsi_set_options", 00:47:03.746 "params": { 00:47:03.746 "timeout_sec": 30 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_nvme_set_options", 00:47:03.746 "params": { 00:47:03.746 "action_on_timeout": "none", 00:47:03.746 "timeout_us": 0, 00:47:03.746 "timeout_admin_us": 0, 00:47:03.746 "keep_alive_timeout_ms": 10000, 00:47:03.746 "arbitration_burst": 0, 00:47:03.746 "low_priority_weight": 0, 00:47:03.746 "medium_priority_weight": 0, 00:47:03.746 "high_priority_weight": 0, 00:47:03.746 "nvme_adminq_poll_period_us": 10000, 00:47:03.746 "nvme_ioq_poll_period_us": 0, 00:47:03.746 "io_queue_requests": 512, 
00:47:03.746 "delay_cmd_submit": true, 00:47:03.746 "transport_retry_count": 4, 00:47:03.746 "bdev_retry_count": 3, 00:47:03.746 "transport_ack_timeout": 0, 00:47:03.746 "ctrlr_loss_timeout_sec": 0, 00:47:03.746 "reconnect_delay_sec": 0, 00:47:03.746 "fast_io_fail_timeout_sec": 0, 00:47:03.746 "disable_auto_failback": false, 00:47:03.746 "generate_uuids": false, 00:47:03.746 "transport_tos": 0, 00:47:03.746 "nvme_error_stat": false, 00:47:03.746 "rdma_srq_size": 0, 00:47:03.746 "io_path_stat": false, 00:47:03.746 "allow_accel_sequence": false, 00:47:03.746 "rdma_max_cq_size": 0, 00:47:03.746 "rdma_cm_event_timeout_ms": 0, 00:47:03.746 "dhchap_digests": [ 00:47:03.746 "sha256", 00:47:03.746 "sha384", 00:47:03.746 "sha512" 00:47:03.746 ], 00:47:03.746 "dhchap_dhgroups": [ 00:47:03.746 "null", 00:47:03.746 "ffdhe2048", 00:47:03.746 "ffdhe3072", 00:47:03.746 "ffdhe4096", 00:47:03.746 "ffdhe6144", 00:47:03.746 "ffdhe8192" 00:47:03.746 ], 00:47:03.746 "rdma_umr_per_io": false 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_nvme_attach_controller", 00:47:03.746 "params": { 00:47:03.746 "name": "nvme0", 00:47:03.746 "trtype": "TCP", 00:47:03.746 "adrfam": "IPv4", 00:47:03.746 "traddr": "127.0.0.1", 00:47:03.746 "trsvcid": "4420", 00:47:03.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:03.746 "prchk_reftag": false, 00:47:03.746 "prchk_guard": false, 00:47:03.746 "ctrlr_loss_timeout_sec": 0, 00:47:03.746 "reconnect_delay_sec": 0, 00:47:03.746 "fast_io_fail_timeout_sec": 0, 00:47:03.746 "psk": "key0", 00:47:03.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:03.746 "hdgst": false, 00:47:03.746 "ddgst": false, 00:47:03.746 "multipath": "multipath" 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_nvme_set_hotplug", 00:47:03.746 "params": { 00:47:03.746 "period_us": 100000, 00:47:03.746 "enable": false 00:47:03.746 } 00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "method": "bdev_wait_for_examine" 00:47:03.746 } 00:47:03.746 ] 
00:47:03.746 }, 00:47:03.746 { 00:47:03.746 "subsystem": "nbd", 00:47:03.746 "config": [] 00:47:03.746 } 00:47:03.746 ] 00:47:03.746 }' 00:47:03.746 00:27:42 keyring_file -- keyring/file.sh@115 -- # killprocess 183659 00:47:03.746 00:27:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 183659 ']' 00:47:03.746 00:27:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 183659 00:47:03.746 00:27:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:03.746 00:27:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:03.746 00:27:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183659 00:47:04.005 00:27:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:04.005 00:27:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:04.005 00:27:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183659' 00:47:04.005 killing process with pid 183659 00:47:04.005 00:27:42 keyring_file -- common/autotest_common.sh@973 -- # kill 183659 00:47:04.005 Received shutdown signal, test time was about 1.000000 seconds 00:47:04.005 00:47:04.005 Latency(us) 00:47:04.005 [2024-12-13T23:27:43.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:04.005 [2024-12-13T23:27:43.146Z] =================================================================================================================== 00:47:04.005 [2024-12-13T23:27:43.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:04.005 00:27:42 keyring_file -- common/autotest_common.sh@978 -- # wait 183659 00:47:04.943 00:27:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=185337 00:47:04.943 00:27:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 185337 /var/tmp/bperf.sock 00:47:04.943 00:27:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 185337 ']' 00:47:04.943 00:27:43 keyring_file -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:04.943 00:27:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:47:04.943 00:27:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:04.943 00:27:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:04.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:04.943 00:27:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:47:04.943 "subsystems": [ 00:47:04.943 { 00:47:04.943 "subsystem": "keyring", 00:47:04.943 "config": [ 00:47:04.943 { 00:47:04.943 "method": "keyring_file_add_key", 00:47:04.943 "params": { 00:47:04.943 "name": "key0", 00:47:04.943 "path": "/tmp/tmp.dxOpouXkTf" 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "keyring_file_add_key", 00:47:04.943 "params": { 00:47:04.943 "name": "key1", 00:47:04.943 "path": "/tmp/tmp.TJMQsELWpr" 00:47:04.943 } 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "iobuf", 00:47:04.943 "config": [ 00:47:04.943 { 00:47:04.943 "method": "iobuf_set_options", 00:47:04.943 "params": { 00:47:04.943 "small_pool_count": 8192, 00:47:04.943 "large_pool_count": 1024, 00:47:04.943 "small_bufsize": 8192, 00:47:04.943 "large_bufsize": 135168, 00:47:04.943 "enable_numa": false 00:47:04.943 } 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "sock", 00:47:04.943 "config": [ 00:47:04.943 { 00:47:04.943 "method": "sock_set_default_impl", 00:47:04.943 "params": { 00:47:04.943 "impl_name": "posix" 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "sock_impl_set_options", 00:47:04.943 "params": { 00:47:04.943 "impl_name": "ssl", 
00:47:04.943 "recv_buf_size": 4096, 00:47:04.943 "send_buf_size": 4096, 00:47:04.943 "enable_recv_pipe": true, 00:47:04.943 "enable_quickack": false, 00:47:04.943 "enable_placement_id": 0, 00:47:04.943 "enable_zerocopy_send_server": true, 00:47:04.943 "enable_zerocopy_send_client": false, 00:47:04.943 "zerocopy_threshold": 0, 00:47:04.943 "tls_version": 0, 00:47:04.943 "enable_ktls": false 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "sock_impl_set_options", 00:47:04.943 "params": { 00:47:04.943 "impl_name": "posix", 00:47:04.943 "recv_buf_size": 2097152, 00:47:04.943 "send_buf_size": 2097152, 00:47:04.943 "enable_recv_pipe": true, 00:47:04.943 "enable_quickack": false, 00:47:04.943 "enable_placement_id": 0, 00:47:04.943 "enable_zerocopy_send_server": true, 00:47:04.943 "enable_zerocopy_send_client": false, 00:47:04.943 "zerocopy_threshold": 0, 00:47:04.943 "tls_version": 0, 00:47:04.943 "enable_ktls": false 00:47:04.943 } 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "vmd", 00:47:04.943 "config": [] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "accel", 00:47:04.943 "config": [ 00:47:04.943 { 00:47:04.943 "method": "accel_set_options", 00:47:04.943 "params": { 00:47:04.943 "small_cache_size": 128, 00:47:04.943 "large_cache_size": 16, 00:47:04.943 "task_count": 2048, 00:47:04.943 "sequence_count": 2048, 00:47:04.943 "buf_count": 2048 00:47:04.943 } 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "bdev", 00:47:04.943 "config": [ 00:47:04.943 { 00:47:04.943 "method": "bdev_set_options", 00:47:04.943 "params": { 00:47:04.943 "bdev_io_pool_size": 65535, 00:47:04.943 "bdev_io_cache_size": 256, 00:47:04.943 "bdev_auto_examine": true, 00:47:04.943 "iobuf_small_cache_size": 128, 00:47:04.943 "iobuf_large_cache_size": 16 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_raid_set_options", 00:47:04.943 "params": { 00:47:04.943 
"process_window_size_kb": 1024, 00:47:04.943 "process_max_bandwidth_mb_sec": 0 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_iscsi_set_options", 00:47:04.943 "params": { 00:47:04.943 "timeout_sec": 30 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_nvme_set_options", 00:47:04.943 "params": { 00:47:04.943 "action_on_timeout": "none", 00:47:04.943 "timeout_us": 0, 00:47:04.943 "timeout_admin_us": 0, 00:47:04.943 "keep_alive_timeout_ms": 10000, 00:47:04.943 "arbitration_burst": 0, 00:47:04.943 "low_priority_weight": 0, 00:47:04.943 "medium_priority_weight": 0, 00:47:04.943 "high_priority_weight": 0, 00:47:04.943 "nvme_adminq_poll_period_us": 10000, 00:47:04.943 "nvme_ioq_poll_period_us": 0, 00:47:04.943 "io_queue_requests": 512, 00:47:04.943 "delay_cmd_submit": true, 00:47:04.943 "transport_retry_count": 4, 00:47:04.943 "bdev_retry_count": 3, 00:47:04.943 "transport_ack_timeout": 0, 00:47:04.943 "ctrlr_loss_timeout_sec": 0, 00:47:04.943 "reconnect_delay_sec": 0, 00:47:04.943 "fast_io_fail_timeout_sec": 0, 00:47:04.943 "disable_auto_failback": false, 00:47:04.943 "generate_uuids": false, 00:47:04.943 "transport_tos": 0, 00:47:04.943 "nvme_error_stat": false, 00:47:04.943 "rdma_srq_size": 0, 00:47:04.943 "io_path_stat": false, 00:47:04.943 "allow_accel_sequence": false, 00:47:04.943 "rdma_max_cq_size": 0, 00:47:04.943 "rdma_cm_event_timeout_ms": 0, 00:47:04.943 "dhchap_digests": [ 00:47:04.943 "sha256", 00:47:04.943 "sha384", 00:47:04.943 "sha512" 00:47:04.943 ], 00:47:04.943 "dhchap_dhgroups": [ 00:47:04.943 "null", 00:47:04.943 "ffdhe2048", 00:47:04.943 "ffdhe3072", 00:47:04.943 "ffdhe4096", 00:47:04.943 "ffdhe6144", 00:47:04.943 "ffdhe8192" 00:47:04.943 ], 00:47:04.943 "rdma_umr_per_io": false 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_nvme_attach_controller", 00:47:04.943 "params": { 00:47:04.943 "name": "nvme0", 00:47:04.943 "trtype": "TCP", 00:47:04.943 "adrfam": "IPv4", 
00:47:04.943 "traddr": "127.0.0.1", 00:47:04.943 "trsvcid": "4420", 00:47:04.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:04.943 "prchk_reftag": false, 00:47:04.943 "prchk_guard": false, 00:47:04.943 "ctrlr_loss_timeout_sec": 0, 00:47:04.943 "reconnect_delay_sec": 0, 00:47:04.943 "fast_io_fail_timeout_sec": 0, 00:47:04.943 "psk": "key0", 00:47:04.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:04.943 "hdgst": false, 00:47:04.943 "ddgst": false, 00:47:04.943 "multipath": "multipath" 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_nvme_set_hotplug", 00:47:04.943 "params": { 00:47:04.943 "period_us": 100000, 00:47:04.943 "enable": false 00:47:04.943 } 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "method": "bdev_wait_for_examine" 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }, 00:47:04.943 { 00:47:04.943 "subsystem": "nbd", 00:47:04.943 "config": [] 00:47:04.943 } 00:47:04.943 ] 00:47:04.943 }' 00:47:04.943 00:27:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:04.944 00:27:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:04.944 [2024-12-14 00:27:43.888521] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
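The configuration blob echoed into bdevperf above follows the SPDK startup-config schema: a top-level `subsystems` array, where each entry names a `subsystem` and carries a `config` list of JSON-RPC method calls replayed at startup. A minimal sketch of building the keyring portion of such a config (the `make_keyring_config` helper is illustrative, not part of SPDK; method and parameter names are taken from the log):

```python
import json

def make_keyring_config(keys):
    """Build a bdevperf/SPDK startup config containing a keyring subsystem.

    `keys` maps key names to on-disk key file paths, mirroring the
    keyring_file_add_key calls in the log above.
    """
    return {
        "subsystems": [
            {
                "subsystem": "keyring",
                "config": [
                    {
                        "method": "keyring_file_add_key",
                        "params": {"name": name, "path": path},
                    }
                    for name, path in keys.items()
                ],
            }
        ]
    }

# Same key names and paths as the config echoed in the log.
cfg = make_keyring_config({
    "key0": "/tmp/tmp.dxOpouXkTf",
    "key1": "/tmp/tmp.TJMQsELWpr",
})
print(json.dumps(cfg, indent=2))
```

In the test above this JSON is handed to bdevperf as `-c /dev/fd/63` via shell process substitution, so it never touches disk.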
00:47:04.944 [2024-12-14 00:27:43.888628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185337 ] 00:47:04.944 [2024-12-14 00:27:44.003653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:05.202 [2024-12-14 00:27:44.113421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:47:05.462 [2024-12-14 00:27:44.541083] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:05.722 00:27:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:05.722 00:27:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:47:05.722 00:27:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:47:05.722 00:27:44 keyring_file -- keyring/file.sh@121 -- # jq length 00:47:05.722 00:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:05.981 00:27:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:47:05.981 00:27:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:47:05.981 00:27:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:47:05.981 00:27:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:05.981 00:27:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:05.981 00:27:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:47:05.981 00:27:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:05.981 00:27:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:47:05.981 00:27:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:47:05.981 00:27:45 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:47:05.981 00:27:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:47:05.981 00:27:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:47:05.981 00:27:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:05.981 00:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:06.240 00:27:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:47:06.240 00:27:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:47:06.240 00:27:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:47:06.240 00:27:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:47:06.498 00:27:45 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:47:06.498 00:27:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:47:06.498 00:27:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.dxOpouXkTf /tmp/tmp.TJMQsELWpr 00:47:06.498 00:27:45 keyring_file -- keyring/file.sh@20 -- # killprocess 185337 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 185337 ']' 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 185337 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185337 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 185337' 00:47:06.498 killing process with pid 185337 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@973 -- # kill 185337 00:47:06.498 Received shutdown signal, test time was about 1.000000 seconds 00:47:06.498 00:47:06.498 Latency(us) 00:47:06.498 [2024-12-13T23:27:45.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:06.498 [2024-12-13T23:27:45.639Z] =================================================================================================================== 00:47:06.498 [2024-12-13T23:27:45.639Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:47:06.498 00:27:45 keyring_file -- common/autotest_common.sh@978 -- # wait 185337 00:47:07.435 00:27:46 keyring_file -- keyring/file.sh@21 -- # killprocess 183571 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 183571 ']' 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 183571 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183571 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183571' 00:47:07.435 killing process with pid 183571 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@973 -- # kill 183571 00:47:07.435 00:27:46 keyring_file -- common/autotest_common.sh@978 -- # wait 183571 00:47:10.126 00:47:10.126 real 0m16.363s 00:47:10.126 user 0m35.532s 00:47:10.126 sys 0m2.918s 00:47:10.126 00:27:48 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:10.126 00:27:48 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:47:10.126 ************************************ 00:47:10.126 END TEST keyring_file 00:47:10.126 ************************************ 00:47:10.126 00:27:48 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:47:10.126 00:27:48 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:10.126 00:27:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:47:10.126 00:27:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:47:10.126 00:27:48 -- common/autotest_common.sh@10 -- # set +x 00:47:10.126 ************************************ 00:47:10.126 START TEST keyring_linux 00:47:10.126 ************************************ 00:47:10.126 00:27:48 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:47:10.126 Joined session keyring: 530619637 00:47:10.126 * Looking for test storage... 
00:47:10.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:47:10.126 00:27:48 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:47:10.126 00:27:48 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:47:10.126 00:27:48 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@345 -- # : 1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@368 -- # return 0 00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:47:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.126 --rc genhtml_branch_coverage=1 00:47:10.126 --rc genhtml_function_coverage=1 00:47:10.126 --rc genhtml_legend=1 00:47:10.126 --rc geninfo_all_blocks=1 00:47:10.126 --rc geninfo_unexecuted_blocks=1 00:47:10.126 00:47:10.126 ' 00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:47:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.126 --rc genhtml_branch_coverage=1 00:47:10.126 --rc genhtml_function_coverage=1 00:47:10.126 --rc genhtml_legend=1 00:47:10.126 --rc geninfo_all_blocks=1 00:47:10.126 --rc geninfo_unexecuted_blocks=1 00:47:10.126 00:47:10.126 ' 
00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:47:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.126 --rc genhtml_branch_coverage=1 00:47:10.126 --rc genhtml_function_coverage=1 00:47:10.126 --rc genhtml_legend=1 00:47:10.126 --rc geninfo_all_blocks=1 00:47:10.126 --rc geninfo_unexecuted_blocks=1 00:47:10.126 00:47:10.126 ' 00:47:10.126 00:27:49 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:47:10.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:47:10.126 --rc genhtml_branch_coverage=1 00:47:10.126 --rc genhtml_function_coverage=1 00:47:10.126 --rc genhtml_legend=1 00:47:10.126 --rc geninfo_all_blocks=1 00:47:10.126 --rc geninfo_unexecuted_blocks=1 00:47:10.126 00:47:10.126 ' 00:47:10.126 00:27:49 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:47:10.126 00:27:49 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:10.126 00:27:49 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:10.126 00:27:49 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:10.127 00:27:49 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:10.127 00:27:49 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.127 00:27:49 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.127 00:27:49 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.127 00:27:49 keyring_linux -- paths/export.sh@5 -- # export PATH 00:47:10.127 00:27:49 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:47:10.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:47:10.127 /tmp/:spdk-test:key0 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:47:10.127 00:27:49 keyring_linux -- nvmf/common.sh@733 -- # python - 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:47:10.127 00:27:49 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:47:10.127 /tmp/:spdk-test:key1 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=186329 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:47:10.127 00:27:49 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 186329 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 186329 ']' 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:10.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:10.127 00:27:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:10.386 [2024-12-14 00:27:49.251616] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
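The `prep_key`/`format_interchange_psk` steps a few lines up wrap a raw hex key into the NVMe TLS PSK interchange framing via an embedded `python -` heredoc. A sketch of what that derivation plausibly looks like, assuming the layout `NVMeTLSkey-1:<digest>:<base64(key material + CRC32)>:` — the CRC and encoding details here are a reading of the observed output, not code lifted from `nvmf/common.sh`:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Sketch of the NVMe TLS configured-PSK interchange format.

    The key material (here the ASCII hex string, as in the test) gets a
    little-endian CRC32 appended, and the whole payload is base64-encoded
    inside the NVMeTLSkey-1 framing. Assumed layout, not authoritative.
    """
    payload = key_hex.encode()
    crc = zlib.crc32(payload).to_bytes(4, "little")
    b64 = base64.b64encode(payload + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

The resulting string is what the test loads into the kernel session keyring with `keyctl add user :spdk-test:key0 ... @s`, and later hands to `bdev_nvme_attach_controller --psk`.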
00:47:10.386 [2024-12-14 00:27:49.251709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186329 ] 00:47:10.386 [2024-12-14 00:27:49.359788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:10.386 [2024-12-14 00:27:49.466962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:11.322 [2024-12-14 00:27:50.306489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:11.322 null0 00:47:11.322 [2024-12-14 00:27:50.338519] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:11.322 [2024-12-14 00:27:50.338906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:47:11.322 349399241 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:47:11.322 89905074 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=186565 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 186565 /var/tmp/bperf.sock 00:47:11.322 00:27:50 keyring_linux -- 
common/autotest_common.sh@835 -- # '[' -z 186565 ']' 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:47:11.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:11.322 00:27:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:11.322 00:27:50 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:47:11.322 [2024-12-14 00:27:50.435449] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:47:11.323 [2024-12-14 00:27:50.435532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186565 ] 00:47:11.582 [2024-12-14 00:27:50.547281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:11.582 [2024-12-14 00:27:50.660343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:47:12.152 00:27:51 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:12.152 00:27:51 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:47:12.152 00:27:51 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:47:12.152 00:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:47:12.410 00:27:51 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:47:12.411 00:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:47:12.978 00:27:51 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:12.978 00:27:51 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:47:12.978 [2024-12-14 00:27:52.073526] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:13.236 nvme0n1 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:13.236 00:27:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:47:13.236 00:27:52 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:47:13.236 00:27:52 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:47:13.236 00:27:52 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:47:13.236 00:27:52 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@25 -- # sn=349399241 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@26 -- # [[ 349399241 == \3\4\9\3\9\9\2\4\1 ]] 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 349399241 00:47:13.495 00:27:52 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:47:13.495 00:27:52 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:47:13.753 Running I/O for 1 seconds... 00:47:14.690 15929.00 IOPS, 62.22 MiB/s 00:47:14.690 Latency(us) 00:47:14.690 [2024-12-13T23:27:53.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:14.690 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:47:14.690 nvme0n1 : 1.01 15928.13 62.22 0.00 0.00 8002.70 3432.84 10985.08 00:47:14.690 [2024-12-13T23:27:53.831Z] =================================================================================================================== 00:47:14.690 [2024-12-13T23:27:53.831Z] Total : 15928.13 62.22 0.00 0.00 8002.70 3432.84 10985.08 00:47:14.690 { 00:47:14.690 "results": [ 00:47:14.690 { 00:47:14.690 "job": "nvme0n1", 00:47:14.690 "core_mask": "0x2", 00:47:14.690 "workload": "randread", 00:47:14.690 "status": "finished", 00:47:14.690 "queue_depth": 128, 00:47:14.690 "io_size": 4096, 00:47:14.690 "runtime": 1.008091, 00:47:14.690 "iops": 15928.12553628591, 00:47:14.690 "mibps": 62.219240376116836, 00:47:14.690 "io_failed": 0, 00:47:14.690 "io_timeout": 0, 00:47:14.690 "avg_latency_us": 8002.702866276983, 00:47:14.690 "min_latency_us": 3432.8380952380953, 00:47:14.690 "max_latency_us": 10985.081904761904 00:47:14.690 } 00:47:14.690 ], 00:47:14.690 "core_count": 1 00:47:14.690 } 00:47:14.690 00:27:53 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:47:14.690 00:27:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:47:14.949 00:27:53 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:47:14.949 00:27:53 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:47:14.949 00:27:53 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:47:14.949 00:27:53 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:47:14.949 00:27:53 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:47:14.949 00:27:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:47:14.949 00:27:54 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:47:14.949 00:27:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:47:14.949 00:27:54 keyring_linux -- keyring/linux.sh@23 -- # return 00:47:14.949 00:27:54 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:47:14.949 00:27:54 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:14.949 00:27:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:47:15.207 [2024-12-14 00:27:54.262563] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:47:15.208 [2024-12-14 00:27:54.262906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (107): Transport endpoint is not connected 00:47:15.208 [2024-12-14 00:27:54.263891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500032ad00 (9): Bad file descriptor 00:47:15.208 [2024-12-14 00:27:54.264887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:47:15.208 [2024-12-14 00:27:54.264914] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:47:15.208 [2024-12-14 00:27:54.264931] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:47:15.208 [2024-12-14 00:27:54.264943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:47:15.208 request: 00:47:15.208 { 00:47:15.208 "name": "nvme0", 00:47:15.208 "trtype": "tcp", 00:47:15.208 "traddr": "127.0.0.1", 00:47:15.208 "adrfam": "ipv4", 00:47:15.208 "trsvcid": "4420", 00:47:15.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:15.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:47:15.208 "prchk_reftag": false, 00:47:15.208 "prchk_guard": false, 00:47:15.208 "hdgst": false, 00:47:15.208 "ddgst": false, 00:47:15.208 "psk": ":spdk-test:key1", 00:47:15.208 "allow_unrecognized_csi": false, 00:47:15.208 "method": "bdev_nvme_attach_controller", 00:47:15.208 "req_id": 1 00:47:15.208 } 00:47:15.208 Got JSON-RPC error response 00:47:15.208 response: 00:47:15.208 { 00:47:15.208 "code": -5, 00:47:15.208 "message": "Input/output error" 00:47:15.208 } 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@33 -- # sn=349399241 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 349399241 00:47:15.208 1 links removed 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:47:15.208 
00:27:54 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@33 -- # sn=89905074 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 89905074 00:47:15.208 1 links removed 00:47:15.208 00:27:54 keyring_linux -- keyring/linux.sh@41 -- # killprocess 186565 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 186565 ']' 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 186565 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:15.208 00:27:54 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186565 00:47:15.467 00:27:54 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:47:15.467 00:27:54 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:47:15.467 00:27:54 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186565' 00:47:15.467 killing process with pid 186565 00:47:15.467 00:27:54 keyring_linux -- common/autotest_common.sh@973 -- # kill 186565 00:47:15.467 Received shutdown signal, test time was about 1.000000 seconds 00:47:15.467 00:47:15.467 Latency(us) 00:47:15.467 [2024-12-13T23:27:54.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:15.467 [2024-12-13T23:27:54.608Z] =================================================================================================================== 00:47:15.467 [2024-12-13T23:27:54.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:15.467 00:27:54 keyring_linux -- common/autotest_common.sh@978 -- # wait 186565 
00:47:16.404 00:27:55 keyring_linux -- keyring/linux.sh@42 -- # killprocess 186329 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 186329 ']' 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 186329 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186329 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186329' 00:47:16.404 killing process with pid 186329 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@973 -- # kill 186329 00:47:16.404 00:27:55 keyring_linux -- common/autotest_common.sh@978 -- # wait 186329 00:47:18.939 00:47:18.939 real 0m8.741s 00:47:18.939 user 0m14.303s 00:47:18.939 sys 0m1.612s 00:47:18.939 00:27:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:18.939 00:27:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:47:18.939 ************************************ 00:47:18.939 END TEST keyring_linux 00:47:18.939 ************************************ 00:47:18.939 00:27:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:18.939 00:27:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:18.939 00:27:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:18.939 00:27:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:18.939 00:27:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:18.939 00:27:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:18.940 00:27:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:18.940 00:27:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:18.940 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:47:18.940 00:27:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:18.940 00:27:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:18.940 00:27:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:18.940 00:27:57 -- common/autotest_common.sh@10 -- # set +x 00:47:24.213 INFO: APP EXITING 00:47:24.213 INFO: killing all VMs 00:47:24.213 INFO: killing vhost app 00:47:24.213 INFO: EXIT DONE 00:47:25.590 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:47:25.590 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:47:25.590 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:47:25.590 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:47:25.590 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:47:25.591 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:47:25.849 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:47:25.849 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:47:28.382 Cleaning 00:47:28.382 Removing: /var/run/dpdk/spdk0/config 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:28.382 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:28.382 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:28.382 Removing: /var/run/dpdk/spdk1/config 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:28.382 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:28.382 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:28.382 Removing: /var/run/dpdk/spdk2/config 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:28.382 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:28.382 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:28.382 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:28.382 Removing: /var/run/dpdk/spdk3/config 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:28.382 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:28.382 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:28.382 Removing: /var/run/dpdk/spdk4/config 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:28.383 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:28.383 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:47:28.383 Removing: /dev/shm/bdev_svc_trace.1 00:47:28.383 Removing: /dev/shm/nvmf_trace.0 00:47:28.383 Removing: /dev/shm/spdk_tgt_trace.pid3790072 00:47:28.383 Removing: /var/run/dpdk/spdk0 00:47:28.383 Removing: /var/run/dpdk/spdk1 00:47:28.383 Removing: /var/run/dpdk/spdk2 00:47:28.383 Removing: /var/run/dpdk/spdk3 00:47:28.383 Removing: /var/run/dpdk/spdk4 00:47:28.383 Removing: /var/run/dpdk/spdk_pid107944 00:47:28.383 Removing: /var/run/dpdk/spdk_pid116687 00:47:28.383 Removing: /var/run/dpdk/spdk_pid118462 00:47:28.383 Removing: /var/run/dpdk/spdk_pid119575 00:47:28.383 Removing: /var/run/dpdk/spdk_pid136776 00:47:28.383 Removing: /var/run/dpdk/spdk_pid140974 00:47:28.383 Removing: /var/run/dpdk/spdk_pid143863 00:47:28.383 Removing: /var/run/dpdk/spdk_pid151648 00:47:28.383 Removing: /var/run/dpdk/spdk_pid151679 00:47:28.383 Removing: /var/run/dpdk/spdk_pid15515 00:47:28.383 Removing: /var/run/dpdk/spdk_pid156823 00:47:28.383 Removing: /var/run/dpdk/spdk_pid158943 00:47:28.383 Removing: /var/run/dpdk/spdk_pid161075 00:47:28.383 Removing: /var/run/dpdk/spdk_pid162390 00:47:28.383 Removing: /var/run/dpdk/spdk_pid164531 00:47:28.383 Removing: /var/run/dpdk/spdk_pid165920 00:47:28.383 Removing: /var/run/dpdk/spdk_pid174705 00:47:28.383 Removing: /var/run/dpdk/spdk_pid175252 00:47:28.383 Removing: /var/run/dpdk/spdk_pid175941 00:47:28.383 Removing: /var/run/dpdk/spdk_pid178972 00:47:28.383 Removing: /var/run/dpdk/spdk_pid179425 00:47:28.383 Removing: /var/run/dpdk/spdk_pid179880 00:47:28.383 Removing: /var/run/dpdk/spdk_pid183571 00:47:28.383 Removing: /var/run/dpdk/spdk_pid183659 00:47:28.383 Removing: /var/run/dpdk/spdk_pid185337 00:47:28.383 Removing: /var/run/dpdk/spdk_pid186329 00:47:28.383 Removing: /var/run/dpdk/spdk_pid186565 00:47:28.383 Removing: /var/run/dpdk/spdk_pid22899 00:47:28.383 Removing: /var/run/dpdk/spdk_pid22904 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3786192 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3787683 00:47:28.383 
Removing: /var/run/dpdk/spdk_pid3790072 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3790939 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3792693 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3793406 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3794680 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3794908 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3795695 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3797399 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3798879 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3799622 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3800357 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3801103 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3801844 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3802111 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3802355 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3802755 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3803799 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3807167 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3807867 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3808362 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3808588 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3810366 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3810440 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3812238 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3812462 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3812949 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3813177 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3813667 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3813890 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3815361 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3815730 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3816102 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3820335 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3824830 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3835637 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3836304 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3840825 00:47:28.383 Removing: 
/var/run/dpdk/spdk_pid3841193 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3845823 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3851811 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3854715 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3865696 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3874886 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3877250 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3878340 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3896131 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3900427 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3984764 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3990250 00:47:28.383 Removing: /var/run/dpdk/spdk_pid3996119 00:47:28.383 Removing: /var/run/dpdk/spdk_pid4006567 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4035242 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4040300 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4041978 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4043877 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4044317 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4044674 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4045006 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4045948 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4047963 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4049546 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4050314 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4052804 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4053732 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4054679 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4059150 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4065012 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4065013 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4065014 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4068956 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4073021 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4078414 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4115101 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4119497 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4125798 
00:47:28.642 Removing: /var/run/dpdk/spdk_pid4127849 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4130028 00:47:28.642 Removing: /var/run/dpdk/spdk_pid41319 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4131995 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4137065 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4141996 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4146393 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4154100 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4154109 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4159474 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4159700 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4159922 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4160420 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4160588 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4161951 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4163720 00:47:28.642 Removing: /var/run/dpdk/spdk_pid4165280 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4166839 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4168511 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4170158 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4176114 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4176699 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4178582 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4179597 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4185633 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4188454 00:47:28.643 Removing: /var/run/dpdk/spdk_pid4194262 00:47:28.643 Removing: /var/run/dpdk/spdk_pid42003 00:47:28.643 Removing: /var/run/dpdk/spdk_pid42879 00:47:28.643 Removing: /var/run/dpdk/spdk_pid43586 00:47:28.643 Removing: /var/run/dpdk/spdk_pid44964 00:47:28.643 Removing: /var/run/dpdk/spdk_pid46168 00:47:28.643 Removing: /var/run/dpdk/spdk_pid46867 00:47:28.643 Removing: /var/run/dpdk/spdk_pid47547 00:47:28.643 Removing: /var/run/dpdk/spdk_pid52053 00:47:28.643 Removing: /var/run/dpdk/spdk_pid52457 00:47:28.643 Removing: /var/run/dpdk/spdk_pid58760 00:47:28.643 Removing: /var/run/dpdk/spdk_pid58988 
00:47:28.643 Removing: /var/run/dpdk/spdk_pid64404 00:47:28.643 Removing: /var/run/dpdk/spdk_pid6632 00:47:28.643 Removing: /var/run/dpdk/spdk_pid68780 00:47:28.643 Removing: /var/run/dpdk/spdk_pid78374 00:47:28.643 Removing: /var/run/dpdk/spdk_pid79025 00:47:28.643 Removing: /var/run/dpdk/spdk_pid83338 00:47:28.643 Removing: /var/run/dpdk/spdk_pid83672 00:47:28.643 Removing: /var/run/dpdk/spdk_pid88312 00:47:28.643 Removing: /var/run/dpdk/spdk_pid94569 00:47:28.643 Removing: /var/run/dpdk/spdk_pid97301 00:47:28.643 Clean 00:47:28.902 00:28:07 -- common/autotest_common.sh@1453 -- # return 0 00:47:28.902 00:28:07 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:28.902 00:28:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:28.902 00:28:07 -- common/autotest_common.sh@10 -- # set +x 00:47:28.902 00:28:07 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:47:28.902 00:28:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:28.902 00:28:07 -- common/autotest_common.sh@10 -- # set +x 00:47:28.902 00:28:07 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:28.902 00:28:07 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:28.902 00:28:07 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:28.902 00:28:07 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:28.902 00:28:07 -- spdk/autotest.sh@398 -- # hostname 00:47:28.902 00:28:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:28.902 geninfo: WARNING: 
invalid characters removed from testname! 00:47:50.839 00:28:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:51.407 00:28:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:53.311 00:28:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:55.215 00:28:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:56.590 00:28:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:58.492 00:28:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:48:00.397 00:28:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:48:00.397 00:28:39 -- spdk/autorun.sh@1 -- $ timing_finish 00:48:00.397 00:28:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:48:00.397 00:28:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:00.397 00:28:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:48:00.397 00:28:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:48:00.397 + [[ -n 3707934 ]] 00:48:00.397 + sudo kill 3707934 00:48:00.407 [Pipeline] } 00:48:00.422 [Pipeline] // stage 00:48:00.427 [Pipeline] } 00:48:00.442 [Pipeline] // timeout 00:48:00.447 [Pipeline] } 00:48:00.461 [Pipeline] // catchError 00:48:00.466 [Pipeline] } 00:48:00.481 [Pipeline] // wrap 00:48:00.488 [Pipeline] } 00:48:00.500 [Pipeline] // catchError 00:48:00.510 [Pipeline] stage 00:48:00.512 [Pipeline] { (Epilogue) 00:48:00.525 [Pipeline] catchError 00:48:00.526 
[Pipeline] { 00:48:00.539 [Pipeline] echo 00:48:00.541 Cleanup processes 00:48:00.546 [Pipeline] sh 00:48:00.832 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:00.832 198815 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:00.845 [Pipeline] sh 00:48:01.130 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:48:01.130 ++ grep -v 'sudo pgrep' 00:48:01.130 ++ awk '{print $1}' 00:48:01.130 + sudo kill -9 00:48:01.130 + true 00:48:01.141 [Pipeline] sh 00:48:01.429 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:13.649 [Pipeline] sh 00:48:13.934 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:13.934 Artifacts sizes are good 00:48:13.948 [Pipeline] archiveArtifacts 00:48:13.956 Archiving artifacts 00:48:14.123 [Pipeline] sh 00:48:14.485 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:14.500 [Pipeline] cleanWs 00:48:14.510 [WS-CLEANUP] Deleting project workspace... 00:48:14.510 [WS-CLEANUP] Deferred wipeout is used... 00:48:14.516 [WS-CLEANUP] done 00:48:14.518 [Pipeline] } 00:48:14.535 [Pipeline] // catchError 00:48:14.547 [Pipeline] sh 00:48:14.829 + logger -p user.info -t JENKINS-CI 00:48:14.837 [Pipeline] } 00:48:14.851 [Pipeline] // stage 00:48:14.856 [Pipeline] } 00:48:14.870 [Pipeline] // node 00:48:14.875 [Pipeline] End of Pipeline 00:48:14.937 Finished: SUCCESS
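The epilogue's cleanup stage uses the same list-filter-kill idiom seen at the top of the run: `pgrep -af` lists matching processes, `grep -v 'sudo pgrep'` drops the pgrep invocation itself, `awk '{print $1}'` keeps the PIDs, and the trailing `+ true` makes an empty PID list non-fatal. A runnable sketch of that filter, simulated against a fixed listing (the PIDs and paths here are illustrative, not from a live system):

```shell
# Simulated `sudo pgrep -af <workspace>/spdk` output: the first line is the
# pgrep process itself, the second is a hypothetical target process.
listing='198815 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
4242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt'

# Drop the pgrep line, keep only the first column (the PID).
pids=$(printf '%s\n' "$listing" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"

# The pipeline then runs: sudo kill -9 $pids || true
# (`|| true` is why the log shows `+ true` when nothing matched.)
```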